PS2 question

Can different textures use the same CLUT? In some games that would bring the bits-per-texel count down a lot.

I don't think so. But I'm really not sure.

I’ve heard the buffers on both GS and Flipper consist of a large pool of DRAM with a small amount of SRAM as a kind of cache, so they should be about the same in latency. GS DRAM memory cells probably take up more space, though.

I just checked a few tech articles to refresh my memory, and GC's embedded RAM is definitely 1T-SRAM: 6.2ns sustained latency for both the 2MB frame buffer and the 1MB texture cache.

The only DRAM GC has is the 16MB auxiliary RAM.
 
Squeak said:
How would it be possible to make an SRAM cell with only one transistor?

http://www.mosys.com/products/1t_sram.html


MoSys/NEC had collaborated on making 1T-SRAM a reality. NEC (I believe) fabs the 24MB of 1T-SRAM, Flipper (with its 3MB of embedded 1T-SRAM), and the 16MB of DRAM for GameCube. If the news of NEC and Nintendo working on GameCube's follow-up is true, maybe they will be using 1T-SRAM-Q or whatever else MoSys has up its sleeve for the 2005-2006 timeframe. I'm not sure, but I also believe NEC fabbed the processors/memory for the N64 as well, so NEC getting a contract with Nintendo is very likely.

After being burned by Rambus memory in the N64, Nintendo went with a memory technology that had low latency and predictable performance. I think the Nintendo/NEC/MoSys collaboration has paid off in GameCube's price/performance ratio. I've been a fan of MoSys for a while now; I still have my Tseng Labs ET6000 with MDRAM (Multibank DRAM, created by MoSys).

Maybe someone can answer me this: does NEC own MoSys?
 
Tagrineth said:
But again, GCN has virtual texturing and S3TC which make its texture cache, for all intents and purposes, 6MB with 60GB/sec. :p

Isn’t the PS2 almost ideally suited for virtual texturing? If you look at the 3DLabs description of virtual texturing, the PS2 has all the features needed for a successful implementation, except that the PS2's strong DMAC sits on the CPU side instead of on the GPU, which I can't imagine would make a big difference.
 
Maybe someone can answer me this: does NEC own MoSys?

No...

MoSys/NEC had collaborated on making 1T-SRAM a reality.

NEC had little to do with it other than being one of their first major customers. Also, they're not really SRAMs; they're DRAMs architected to mask cell refresh (the DRAM cells on the GS are actually somewhat similar in philosophy, even though they aren't MoSys technology)...
 
Well, speaking realistically, GCN's texture cache is probably a good deal more efficient thanks to S3TC.

But wrt bandwidth, it depends how many textures are used. I said that without thinking of GCN's virtual texturing, for one, and ignoring S3TC.

Anyway, it all just depends on the number of textures used - let's say you have exactly 2MB of textures. On the PS2, you can throw them straight onto the cache and not give it a second thought, but on the GCN they would have to move around a whole bunch because they don't all fit in at once.

But again, GCN has virtual texturing and S3TC which make its texture cache, for all intents and purposes, 6MB with 60GB/sec.

Tagrineth, if the GC has to move its textures around a whole bunch because they don't all fit in the cache at once, wouldn't that greatly reduce the cache's efficiency? Oh, and another thing: why would the GC use more textures than its cache can hold (1MB, disregarding S3TC)?
 
archie4oz said:
NEC had little to do with it other than being one of their first major customers. Also, they're not really SRAMs; they're DRAMs architected to mask cell refresh (the DRAM cells on the GS are actually somewhat similar in philosophy, even though they aren't MoSys technology)...

Thanks, that does clear up some things that I've been wondering about. :D

And speaking of eDRAM, will Sony be using that again for PS3? What kind of bandwidth do you think they can get out of it (assuming a 65nm process)?
 
Wildstyle said:
Well, speaking realistically, GCN's texture cache is probably a good deal more efficient thanks to S3TC.

But wrt bandwidth, it depends how many textures are used. I said that without thinking of GCN's virtual texturing, for one, and ignoring S3TC.

Anyway, it all just depends on the number of textures used - let's say you have exactly 2MB of textures. On the PS2, you can throw them straight onto the cache and not give it a second thought, but on the GCN they would have to move around a whole bunch because they don't all fit in at once.

But again, GCN has virtual texturing and S3TC which make its texture cache, for all intents and purposes, 6MB with 60GB/sec.

Tagrineth, if the GC has to move its textures around a whole bunch because they don't all fit in the cache at once, wouldn't that greatly reduce the cache's efficiency? Oh, and another thing: why would the GC use more textures than its cache can hold (1MB, disregarding S3TC)?

First off, learn to use [quote][/quote] blocks.

Second, you just summed up what I said.

Third, because S3TC offers 6:1 compression. If textures take up 1/6th their normal space, logically you can thus hold six times as many textures.
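
To put rough numbers on that (just a back-of-the-envelope sketch in C; I'm assuming 24-bit source textures and DXT1's 4 bits per texel, which is where the 6:1 comes from):

    #include <stdio.h>

    int main(void)
    {
        /* Rough numbers only: 1MB texture cache, 256x256 textures,
           24-bit uncompressed vs. S3TC/DXT1 at 4 bits per texel (6:1). */
        const int cache_bytes = 1 * 1024 * 1024;
        const int raw_bytes   = 256 * 256 * 3;   /* 192KB uncompressed */
        const int dxt1_bytes  = 256 * 256 / 2;   /*  32KB compressed   */

        printf("uncompressed: %d fit in the cache\n", cache_bytes / raw_bytes);
        printf("with S3TC:    %d fit in the cache\n", cache_bytes / dxt1_bytes);
        return 0;
    }

So roughly five uncompressed 256x256 textures versus 32 compressed ones in the same 1MB.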
 
Wildstyle said:
But wrt bandwidth, it depends how many textures are used. I said that without thinking of GCN's virtual texturing,
To me, virtual texturing implies that you have a 2 level store for your textures where there is a fast primary storage area (i.e. texture RAM) and a much slower secondary storage area (eg disk or perhaps AGP memory on a PC) for the majority of the textures. The HW would maintain a texture page table and load on-demand page-sized blocks from the 2ndary storage.

Does the GC really have this sort of functionality? (I'm assuming it has a smaller texel cache as well).
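
To sketch what I mean in C (purely illustrative - the names and sizes are made up, and on real hardware the table walk and the page fetch would happen automatically):

    #include <string.h>

    #define PAGE_SIZE  4096    /* bytes per texture page (made up)            */
    #define NUM_SLOTS  256     /* pages that fit in the fast texture RAM      */
    #define NUM_VPAGES 1024    /* virtual pages in the slow backing store     */

    typedef struct { int resident; int slot; } PageEntry;

    static PageEntry     page_table[NUM_VPAGES];
    static unsigned char tex_ram[NUM_SLOTS][PAGE_SIZE];   /* fast primary store   */
    static unsigned char backing[NUM_VPAGES][PAGE_SIZE];  /* slow secondary store */
    static int           slot_owner[NUM_SLOTS];           /* vpage using each slot */
    static int           next_slot;                       /* trivial round-robin   */

    /* Return the fast-RAM copy of virtual page 'vpage', pulling it in
       on demand from the slow store if it isn't resident yet. */
    static unsigned char *touch_page(int vpage)
    {
        PageEntry *pe = &page_table[vpage];
        if (!pe->resident) {
            int s = next_slot++ % NUM_SLOTS;
            page_table[slot_owner[s]].resident = 0;   /* evict previous occupant */
            slot_owner[s] = vpage;
            pe->slot = s;
            memcpy(tex_ram[s], backing[vpage], PAGE_SIZE);
            pe->resident = 1;
        }
        return tex_ram[pe->slot];
    }

The point being that the texture units only ever address the small fast pool; the page table hides the slow store behind it.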
 
Simon F said:
To me, virtual texturing implies that you have a 2 level store for your textures where there is a fast primary storage area (i.e. texture RAM) and a much slower secondary storage area (eg disk or perhaps AGP memory on a PC) for the majority of the textures. The HW would maintain a texture page table and load on-demand page-sized blocks from the 2ndary storage.

Does the GC really have this sort of functionality? (I'm assuming it has a smaller texel cache as well).

Well technically I said it, not Wildstyle ;) But...

Yes, it does. It doesn't have to load entire textures into its cache (PS2 does, normally), it only has to load chunks that are visible. Otherwise textures are stored in main RAM or A-RAM.
 
Tagrineth said:
Well technically I said it, not Wildstyle ;) But...

Yes, it does. It doesn't have to load entire textures into its cache (PS2 does, normally), it only has to load chunks that are visible. Otherwise textures are stored in main RAM or A-RAM.
That's just a cache, then. The PS2 system is a scratchpad.
 
Simon said:
I'm assuming it has a smaller texel cache as well
To the best of my knowledge, the embedded buffer IS the texel cache.

The docs don't seem to call it virtual texturing anyhow, but this is really more a matter of semantics.
Virtual texturing, as 'defined' by 3Dlabs, applied to PC architecture. Seeing how GC is essentially a unified memory system, the term couldn't really fit without taking it a bit more loosely.

I mean - if you look at VT as fetching textures from "main/external memory" to "video/local memory", GC's implementation does fit ;)
 
I've always taken virtual texturing to be the analog of virtual memory, i.e. it appears that you have a large, contiguous (fast) memory area when in fact you have a combination of a smaller fast storage area and a slow, large storage area. To simplify allocation in the smaller storage area, it is addressed via a TLB.
 
Would it be possible to chop textures up into manageable bites, and then set up a paging-loading scheme for them on PS2?

Here is what John Carmack had to say about virtual textures in 2000:
"Embedded dram should be a driving force. It is possible to put several
megs of extremely high bandwidth dram on a chip or die with a video
controller, but won’t be possible (for a while) to cram a 64 meg
geforce in. With virtualized texturing, the major pressure on memory
is drastically reduced. Even an 8mb card would be sufficient for 16
bit 1024x768 or 32 bit 800x600 gaming, no matter what the texture load."
(Whole paper: http://www.beyond3d.com/articles/bandwidth/docs/carmack.zip)

I was thinking: 24-bit 640x480 (which is probably the standard framebuffer size for PS2) is about half of 32-bit 800x600, so shouldn’t around 1MB be the optimal size for a texture buffer at TV screen resolutions?
Something similar was discussed in this thread: http://www.beyond3d.com/forum/viewtopic.php?t=3706&start=0&postdays=0&postorder=asc&highlight=
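
The buffer sizes behind that comparison (colour buffer only, ignoring Z and double buffering):

    #include <stdio.h>

    int main(void)
    {
        const double mb = 1024.0 * 1024.0;
        printf("640x480 @ 24-bit: %.2f MB\n", 640 * 480 * 3 / mb);   /* ~0.88 MB */
        printf("800x600 @ 32-bit: %.2f MB\n", 800 * 600 * 4 / mb);   /* ~1.83 MB */
        return 0;
    }

So the 24-bit 640x480 buffer really is about half the size of the 32-bit 800x600 one.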
 
Squeak said:
Would it be possible to chop textures up into manageable bites, and then set up a paging-loading scheme for them on PS2?
Yes, but you need to dice up your model as well and make sure each polygon doesn't extend across more than one texture segment. This is probably OK for static texturing but might be a bit of a pain in the proverbial for dynamic textures.

In fact, IIRC, there are OGL texture modes deliberately designed to allow you to implement 'huge' textures by piecing them together in this way.
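
In sketch form, the constraint is just that all three vertices of a triangle map into the same tile (the tile size and names here are made up):

    #define TILE 64   /* texels per tile side - arbitrary example size */

    /* Returns 1 if a triangle's texture coordinates (in texels) all fall in
       the same TILE x TILE block, so it can be drawn after uploading just
       that block; 0 means it straddles a boundary and needs splitting. */
    static int tri_fits_one_tile(const int u[3], const int v[3])
    {
        int tu = u[0] / TILE, tv = v[0] / TILE;
        int i;
        for (i = 1; i < 3; ++i)
            if (u[i] / TILE != tu || v[i] / TILE != tv)
                return 0;
        return 1;
    }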
 
Squeak said:
Would it be possible to chop textures up into manageable bites, and then set up a paging-loading scheme for them on PS2?

Possible? Probably. Practical and useful? Almost certainly not.
Any time you start to manage low-level graphics functionality with the processor, you will pretty much become CPU-bound.
 
Simon said:
I've always taken virtual texturing to be the analog of virtual memory, i.e. it appears that you have a large, contiguous (fast) memory area when in fact you have a combination of a smaller fast storage area and a slow, large storage area.
Fair enough, but that also describes a regular cache. :p
The only real difference is the scale we're doing it on (both in size and in the speed difference between the storage pools).
Again, I think GC's naming was simply referring to the automatic use of main memory as storage - as I've noted, it's not official naming anyhow; AFAIK it was used for PR purposes alone.

Squeak said:
Would it be possible to chop textures up into manageable bites, and then set up a paging-loading scheme for them on PS2?
Well, it depends on what you consider 'manageable' bits.
Either way, it'd most likely be counterproductive - most PS2 titles are CPU-core limited most of the time to begin with (not texture or geometry limited, whatever else people want to believe), so you're not really fixing much by adding a ton more CPU overhead.

In practical terms, 'manageable' bits probably don't go much beyond what some games are already doing - forcing artists to fit texture sizes to the eDRAM page size - manually :p
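
For reference - and check the GS manual, I'm quoting these from memory - a GS page is 8KB, and its dimensions in texels depend on the pixel format, so that's the block artists end up fitting textures to:

    /* GS eDRAM page = 8KB; page dimensions in texels per pixel format.
       Figures recalled from the GS manual - double-check before relying on them. */
    enum {
        GS_PAGE_BYTES  = 8192,
        PSMCT32_PAGE_W = 64,  PSMCT32_PAGE_H = 32,    /* 32-bit:  64x32   */
        PSMCT16_PAGE_W = 64,  PSMCT16_PAGE_H = 64,    /* 16-bit:  64x64   */
        PSMT8_PAGE_W   = 128, PSMT8_PAGE_H   = 64,    /*  8-bit: 128x64   */
        PSMT4_PAGE_W   = 128, PSMT4_PAGE_H   = 128    /*  4-bit: 128x128  */
    };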
 
Fafalada said:
Well, it depends on what you consider 'manageable' bits.
Either way, it'd most likely be counterproductive - most PS2 titles are CPU-core limited most of the time to begin with (not texture or geometry limited, whatever else people want to believe), so you're not really fixing much by adding a ton more CPU overhead.

This is probably a naive question, but couldn't the MMU be made to handle it? :oops:

Again Carmack:
The hardware requirements are not very heavy. You need translation lookaside buffers (TLB) on the graphics chip, the ability to automatically load the TLB from a page table set up in local memory, and the ability to move a page from AGP or PCI into graphics memory and update the page tables and reference counts. You don’t even need that many TLB, because graphics access patterns don’t hop all over the place like CPU access can. Even with only a single TLB for each texture bilerp unit, reloads would only account for about 1/32 of the memory access if the textures were 4k blocked. All you would really want at the upper limit would be enough TLB for each texture unit to cover the texels referenced on a typical rasterization scan line.
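
Working his 1/32 figure through (assuming 4KB pages of 32-bit texels laid out as square 32x32 blocks):

    #include <stdio.h>

    int main(void)
    {
        const int page_bytes      = 4096;
        const int bytes_per_texel = 4;
        const int texels_per_page = page_bytes / bytes_per_texel;   /* 1024       */
        const int block_side      = 32;                             /* 32x32 tile */

        /* Walking along a scanline, a new page is only entered every
           'block_side' texels, so TLB reloads are ~1 in 32 of the fetches. */
        printf("%d texels per page, reload roughly 1 in %d fetches\n",
               texels_per_page, block_side);
        return 0;
    }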

Doesn't the EE have plenty of TLBs as part of its strong DMA?
I know they aren't on the GS, but I can't see how that would matter.

Okay, I’ll stop bumping this thread now; it's long enough as it is. :)
 