Homebrew Textures block format

cebolleto

Well-Known Member
OP
Member
Joined
Mar 5, 2010
Messages
203
Trophies
1
Age
43
XP
2,516
Country
For a few days I have been trying to load textures into memory, without luck.

Looking into the code of 3dscraft I found a script named texconv.py which seems to do the job. I have translated it into C++ and it seems to work, but there are a few issues:
- Textures are upside down
- Only square textures seem to load properly

Could anyone explain how textures are supposed to be stored in memory?
Also, just out of curiosity, I'd like to understand why the order of pixels in the GPU is so weird...

Thanks!
 

elhobbs

Well-Known Member
Member
Joined
Jul 28, 2008
Messages
1,044
Trophies
1
XP
3,030
Country
United States
The width and height appear to be reversed in libctru. That still leaves things upside down; I was wondering if that has something to do with the rotated framebuffer.
 

cebolleto

Thanks, that almost did the job... I am still having issues with some resolutions.
Does anyone know why the 3DS uses this format? I imagine it adds some kind of optimization, but I cannot imagine what...
 

elhobbs

The dimensions need to be powers of two; I have not had issues within those constraints. If you are referring to the block format, I think I saw something somewhere about it increasing cache hits when reading the texture.
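The cache argument can be made concrete. Below is a sketch of the index math, assuming the usual description of the swizzle (8x8 tiles stored row-major, texels inside each tile in Morton/Z-order); `morton3` and `swizzledOffset` are illustrative names, not libctru API:

```cpp
// Interleave the low 3 bits of x and y: x bits land in even bit positions,
// y bits in odd positions. This is the Morton (Z-order) index of a texel
// inside an 8x8 tile, and reproduces the tile_order table from texconv.py.
unsigned morton3(unsigned x, unsigned y) {
    unsigned out = 0;
    for (unsigned b = 0; b < 3; ++b) {
        out |= ((x >> b) & 1u) << (2 * b);
        out |= ((y >> b) & 1u) << (2 * b + 1);
    }
    return out;
}

// Offset (in texels) of (x, y) in a swizzled texture of the given width:
// tiles are stored row-major, 64 texels per tile, Morton order inside.
unsigned swizzledOffset(unsigned x, unsigned y, unsigned width) {
    unsigned tile = (y / 8) * (width / 8) + (x / 8);
    return tile * 64 + morton3(x % 8, y % 8);
}
```

The payoff is locality: neighbouring texels share a 64-texel tile, so a bilinear filter footprint usually hits one contiguous block instead of texels spread across several distant rows.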
 

themperror

Well-Known Member
Member
Joined
Aug 12, 2009
Messages
181
Trophies
0
XP
367
Country
Netherlands
Texture dimensions should always be a power of two: 8x8, 16x16, 32x32, 64x64, 128x128, 256x256, 512x512 (anything higher probably won't be needed on the 3DS, also considering speed). In OpenGL and DirectX textures CAN be rectangular; in the old days they had to be square. So 512x256 is allowed there, but I don't know whether it's allowed on the 3DS.

Textures are normally loaded into VRAM; in the case of CPU rendering it's just a block of linear memory, so an array with all the pixels in it will suffice.
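For sources that aren't already a power of two, rounding the dimensions up before conversion is the usual fix (elhobbs's code does this with a scale call). A minimal sketch; `nextPow2` is an illustrative helper, not a libctru function:

```cpp
// Round a dimension up to the next power of two, e.g. to pick the padded
// size of a texture before uploading it. Assumes v >= 1.
unsigned nextPow2(unsigned v) {
    unsigned p = 1;
    while (p < v) p <<= 1;
    return p;
}
```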
 

neobrain

-
Member
Joined
Apr 25, 2014
Messages
306
Trophies
0
XP
730
Country
Textures are stored upside-down in memory indeed, i.e. the v=1 coordinate corresponds to the first data row in memory at the texture source address, whereas the v=0 coordinate corresponds to the last row in memory.

The same applies to framebuffers, fyi.
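In practice that means a conventional top-to-bottom image has to have its rows reversed before (or while) being swizzled. A minimal sketch with tiling left out for clarity; `flipRows` is an illustrative name and `bpp` is bytes per pixel:

```cpp
#include <cstring>

// Copy an image stored top-to-bottom into the bottom-to-top row order the
// GPU expects; bpp is bytes per pixel (e.g. 3 for RGB8, 4 for RGBA8).
void flipRows(const unsigned char* src, unsigned char* dst,
              int width, int height, int bpp) {
    for (int y = 0; y < height; ++y) {
        // source row y becomes destination row (height - 1 - y)
        std::memcpy(dst + (height - 1 - y) * width * bpp,
                    src + y * width * bpp,
                    (size_t)width * bpp);
    }
}
```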
 

minexew

ayy lmao
Member
Joined
Mar 16, 2013
Messages
228
Trophies
0
XP
284
Country
Does anyone know why the 3DS uses this format? I imagine it adds some kind of optimization, but I cannot imagine what...

It's called the "native PICA format", and it either significantly simplifies the hardware design or helps a lot with cache utilization and thus performance (the second seems more likely IMO, because the PICA uses some kind of block-based rendering).
Can't think of any other reason why they would use it.
 

elhobbs

Rectangle textures are allowed. My experience has been that GPU_SetTexture has the width and the height reversed.
 

cebolleto

Sorry, I am still having problems with this... I can load square textures and rectangular textures, but only if their height is bigger than their width. When the width is bigger (64x32, for example) it doesn't look right.

Could anyone post the C++ code? That would be very helpful
 

elhobbs

I am sure this could be optimized, but it takes an 8-bit source and converts it to 16-bit. It does show the proper tile ordering, and it also flips the texture vertically. There is a fair bit of dead code in there, primarily for cross-compiling for Windows. I can confirm that it works with wide and narrow textures. There is a scale call, but it just ensures power-of-two dimensions.

https://github.com/elhobbs/spectre3ds/blob/master/source/sys_textures.cpp#L329
 

cebolleto

I have been taking a look, but I still have the same problem. It doesn't seem to work with textures that are 32x64 or 64x32.
 

cebolleto

Yes, I know... I was already swapping the width and height when calling GPU_SetTexture. The weird thing is that I also need to do it when calling parseTileTrans16, which is not what you had there.
 

elhobbs

That is bizarre. I do not do that and it works fine for me. I am curious. Do you have any code that I could look at to see how you are using this?
 

cebolleto

No, yeah... you were right. After cleaning up my code a bit, it turned out I was swapping width and height in a previous call, so I had to swap them again.
Now everything makes sense.

Thanks a lot for your help elhobbs. Now I can continue working on this :)
 

cebolleto

Sorry, I forgot to post the final code, for anyone who may need it:
Code:
void ReorderImageData(u8* src, u8* palette, u8* dst, int width, int height, void (*copyFunc)(u8*, u8*, int, u8*, int))
{
    // Order of the 64 texels inside an 8x8 tile (Morton / Z-order)
    static int tile_order[] = {
         0,  1,  8,  9,  2,  3, 10, 11,
        16, 17, 24, 25, 18, 19, 26, 27,
         4,  5, 12, 13,  6,  7, 14, 15,
        20, 21, 28, 29, 22, 23, 30, 31,

        32, 33, 40, 41, 34, 35, 42, 43,
        48, 49, 56, 57, 50, 51, 58, 59,
        36, 37, 44, 45, 38, 39, 46, 47,
        52, 53, 60, 61, 54, 55, 62, 63
    };

    int idx = 0;
    int i, j;
    // Walk the destination tile by tile, 8x8 texels at a time
    for(int y = 0; y < height; y += 8)
    {
        for(int x = 0; x < width; x += 8)
        {
            for(int k = 0; k < 64; ++k)
            {
                // (i, j) = row-major position of texel k inside the tile
                i = tile_order[k] % 8;
                j = (tile_order[k] - i) / 8;

                // Flip vertically: the GPU expects the bottom row first
                int src_idx = (height - (y + j) - 1) * width + (x + i);

                copyFunc(src, palette, src_idx, dst, idx);

                idx++;
            }
        }
    }
}

where copyFunc is a pointer to a function similar to these:
Code:
// Copies one texel, reversing the component order the GPU expects.
// The palette parameter is unused here; it exists so paletted formats
// can share the same function-pointer type.
void copyRGB(u8* src, u8* palette, int src_idx, u8* dst, int dst_idx)
{
    dst[dst_idx * 3 + 0] = src[src_idx * 3 + 2];
    dst[dst_idx * 3 + 1] = src[src_idx * 3 + 1];
    dst[dst_idx * 3 + 2] = src[src_idx * 3 + 0];
}

void copyRGBA(u8* src, u8* palette, int src_idx, u8* dst, int dst_idx)
{
    dst[dst_idx * 4 + 0] = src[src_idx * 4 + 3];
    dst[dst_idx * 4 + 1] = src[src_idx * 4 + 2];
    dst[dst_idx * 4 + 2] = src[src_idx * 4 + 1];
    dst[dst_idx * 4 + 3] = src[src_idx * 4 + 0];
}
 
