The pros and cons of eDRAM/ESRAM in next-gen

I was wondering if someone could help me understand what TRAM is ...

Basically I'm a developer and I'm getting my head around the new features in DirectX 11.2, particularly Direct2D and its new "Block Compression DDS" feature (video from the 37-minute mark onwards - http://channel9.msdn.com/Events/Build/2013/3-191 )

Basically, DX11.2 allows developers to save up to 80% of the disk footprint, as well as get improved GPU resource utilization and faster GPU load times; e.g. an 8MB image asset can be reduced to a 0.9MB DDS file.

All these DDS compressed textures are natively compressed and handled on the HW, i.e. it's not a software-based compression solution.

Anyway, I tried to trace the Microsoft patent that covers this feature to see if it gave clues on how it was implemented at the HW level, and I believe I may have found it: "High dynamic range texture compression" - https://www.google.com/patents/US20...X&ei=nMYoUp6oN6aTigezxYDQDQ&ved=0CEkQ6AEwAzgo

Now I believe I understand how the patent defines this feature: basically there's a special "Texture Memory" that holds this DDS, and that is what is accessed by the graphics processors (Figure 1, labelled 156, is the Texture Memory) - https://patentimages.storage.googleapis.com/US20120242674A1/US20120242674A1-20120927-D00000.png

This "Texture Memory" (156) holds the compressed texture (DDS) and is accessible from the CPU/GPU and other co-processors etc.

It's defined in the patent as this:

Now I assumed it was eSRAM, much like what we have on die in the Xbox One's main SoC... BUT what is this TRAM? Is it just a form of eSRAM?

Anyway, sorry for the long-winded question, BUT I thought I'd ask the experts here at Beyond3D; it's clear there are many here who know what they're talking about ;)

TRAM? There's RAM called tunneling-based SRAM, or TSRAM.
 

That presentation is describing bog-standard BC1-BC3 texture compression (formerly known as DXT1-DXT5 in D3D, more generally known as S3TC), which has been around since D3D6. Just about every major 3D game available right now uses it, since it gives you huge memory savings over uncompressed textures.
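For reference, the BC1 block layout is simple enough to sketch: each 4x4 texel block is packed into 8 bytes as two RGB565 endpoint colors followed by sixteen 2-bit palette indices. A rough illustrative decoder (not a production implementation) might look like this:

```python
import struct

def decode_bc1_block(block: bytes):
    """Decode one 8-byte BC1 (DXT1) block into a list of 16 RGBA tuples."""
    c0, c1, indices = struct.unpack("<HHI", block)

    def rgb565(c):
        # Expand the 5:6:5 bit fields to 8-bit channels.
        r = (c >> 11) & 0x1F
        g = (c >> 5) & 0x3F
        b = c & 0x1F
        return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

    p0, p1 = rgb565(c0), rgb565(c1)
    if c0 > c1:
        # 4-color mode: two interpolated colors at 1/3 and 2/3.
        p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
        p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
        palette = [p0 + (255,), p1 + (255,), p2 + (255,), p3 + (255,)]
    else:
        # 3-color mode: one midpoint color plus a transparent entry.
        p2 = tuple((a + b) // 2 for a, b in zip(p0, p1))
        palette = [p0 + (255,), p1 + (255,), p2 + (255,), (0, 0, 0, 0)]

    # Two bits per texel, packed LSB-first, row by row.
    return [palette[(indices >> (2 * i)) & 0x3] for i in range(16)]
```

The fixed 8-bytes-per-block rate is where the memory savings come from: 0.5 bytes per texel versus 4 bytes for uncompressed RGBA8.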

DDS is a very simple container file format for texture data. It's basically just a header that tells you some metadata (size, texel format, number of mipmaps, etc.) followed by the raw texel data using the format specified in the header. It supports all of the runtime texture formats supported by D3D, which makes it useful for storing BC-compressed textures along with pre-generated mipmaps.
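The layout really is that simple. A minimal sketch of reading the fixed 128-byte prefix (magic plus DDS_HEADER) could look like the following; the field offsets follow the documented DDS_HEADER/DDS_PIXELFORMAT structs, and the error handling is deliberately minimal:

```python
import struct

DDS_MAGIC = b"DDS "

def parse_dds_header(data: bytes):
    """Pull basic metadata out of a DDS file's fixed 128-byte prefix."""
    if data[:4] != DDS_MAGIC:
        raise ValueError("not a DDS file")
    # DDS_HEADER starts right after the magic: dwSize, dwFlags, dwHeight,
    # dwWidth, dwPitchOrLinearSize, dwDepth, dwMipMapCount.
    (size, flags, height, width, pitch, depth,
     mip_count) = struct.unpack_from("<7I", data, 4)
    if size != 124:
        raise ValueError("unexpected DDS_HEADER size")
    four_cc = data[84:88]  # DDS_PIXELFORMAT.dwFourCC, e.g. b"DXT1" for BC1
    return {"width": width, "height": height,
            "mip_count": mip_count, "four_cc": four_cc}
```

The texel data (including any pre-generated mip levels) simply follows the header, in the format the FourCC names.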

That bit about reducing an 8MB image to 1MB is quoting runtime sizes, since it's comparing raw uncompressed RGB data to BC1-compressed data. BC compression isn't really a method for saving disk space; in most cases a JPEG or even a PNG will result in a smaller file. However, if you used JPEG or PNG as your runtime asset format, you would have to decode, generate mipmaps, and BC-compress at runtime, which leads to longer loading times. So it usually makes sense to have an offline content build process that compresses your source images to a BC format and saves them as DDS files.
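The arithmetic behind that comparison is easy to check. Assuming RGBA8 for the uncompressed case and BC1's fixed 8 bytes per 4x4 block, a small sketch (`texture_sizes` is a hypothetical helper, not a D3D API) reproduces the roughly 8:1 runtime ratio the talk is quoting:

```python
def texture_sizes(width, height, mips=True):
    """Back-of-envelope runtime sizes: RGBA8 vs BC1, optionally with a
    full mip chain (which adds roughly one third on top of the base)."""
    raw = bc1 = 0
    w, h = width, height
    while True:
        raw += w * h * 4                             # 4 bytes/texel, RGBA8
        bc1 += max(1, w // 4) * max(1, h // 4) * 8   # 8 bytes per 4x4 block
        if not mips or (w == 1 and h == 1):
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return raw, bc1

# A 1024x2048 RGBA8 image is 8 MiB; BC1 cuts it 8:1 to 1 MiB, in the
# same ballpark as the "8mb -> 0.9mb" figure from the presentation.
raw, bc1 = texture_sizes(1024, 2048, mips=False)
```

The remaining gap to 0.9MB is presumably the DDS header plus whichever exact format the demo used.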
 

Someone on Twitter answered me... assuming this is what the patent meant by TRAM, that is :)

TRAM is an application-specific form of SOI-based memory technology. It can be used to create DRAM-like products, or SRAM-like products, with only minor changes in manufacturing and design. For DRAM-like implementations, its density is 4x to 5x that of traditional DRAM, and with notably lower power requirements per bit cell. For SRAM-like implementations, its density is 2x to 2.5x that of traditional 6T-SRAM.

Just found that interesting, seeing as MS is spending a lot of effort educating its developer ecosystem to code closer to the metal and be very aware of how power-hungry their code is.
 

Yep, thanks for your explanation; I actually know all about DDS and block compression in D3D :)

I have a few apps that use Direct2D-drawn background images with applied effect graphs, and with Windows 8.1 (DX11.2) we can now embed DDS versions of these background assets :)

I was just interested in how the HW handles these DDS assets in the case of D2D, and it appears they're stored in TRAM "Texture Memory". And I was just confirming whether the TRAM mentioned in the patent is the same as the eSRAM in the Xbox One, which from what people have told me it is, just a different, more dense version.
 
I seriously doubt that T-RAM is what the patent writer meant when using TRAM.
I think it's shorthand for memory that might have use in texel processing.

Patent language is designed to be as broad and non-specific as possible so as to encompass implementations that differ on one or two details.
The cell type for the memory is so completely irrelevant that it wouldn't figure at all.

From my limited time looking at the patent, I can't tell if the TRAM is a purpose-made pool that is physically part of the GPU or not, if it's the GPU's texture cache, a dedicated area of physically shared main memory, or something as mundane as the typical VRAM on a discrete board. I'm sure this is on purpose.

The ambiguity is to keep specific details from limiting the patent. For example, the system could have any of these buses, or more:
"such architectures may include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus, PCI Express (PCIE), integrated device electronics (IDE), serial advantage technology attachment (SATA), and accelerated graphics port (AGP)"

Note that the system in question may have a TRAM, or may not. That TRAM may be optimized for texture access, or not.
 

Point taken... thanks.

P.S. I know not to take patents too seriously; most don't even end up in a finished product. BUT I've found they give me quite good insight at times!
 
I am a bit disappointed by this news. I mean, I have a 3DTV and for now the Xbox One doesn't support stereoscopic 3D.

Is the eSRAM to blame for the Xbox One not supporting stereoscopic 3D? I wonder... :smile2:

Whether Major Nelson felt slightly scatter-brained when he said it or not, he said months ago that the Xbox One would feature 3D support and 4K resolution support.

To be honest, I am much more interested in the former. But it seems that I will have to wait 'til the first true stereoscopic 3D game is published on the Xbox One. It would be a day-one buy for me... Sigh.

“The only thing the Xbox One won’t have is stereoscopic 3D as they do not support it yet,” Jean said to us.

 
Read more at http://gamingbolt.com/sniper-elite-...-a-bit-slower-on-xbox-one#Vyb0EVpXuUwGYbJw.99


Hey, does the Xbox One have stereoscopic 3D support yet?

Is there any link where I can read more?
 
Not that I know of. I shared like 5 or 6 links in the previous post :smile2: ... just jesting, gotta correct that.

I knew the console doesn't support stereoscopic 3D movies as of yet, but games? Now I understand the dearth of options to play 3D games on the console: there isn't a single game supporting stereoscopic 3D.
 
ESRAM wasn't a mistake. Relying on ESRAM rather than competing with sufficient GPU logic was. ESRAM was only supposed to be a workaround for limited RAM bandwidth. They misjudged the market.
 

The way I understand it, there's less GPU logic because of the space the ESRAM takes up on the APU.
 
Exactly, it is a transistor-budget and die-space problem. If eDRAM had been available they could have included 128-192MB in the same space, or doubled it to 64MB and still increased the size of the GPU logic. Those tradeoffs are why the Xbox One's APU design is unprecedented: spending that many transistors on that little memory is not usually seen as worthwhile. MS was looking ahead to future cost reductions and the availability of eDRAM processes, and decided they could get away with ESRAM to simplify both current production and future shrinks. Given the current results, it is certainly arguable they chose poorly.
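As a back-of-envelope check, taking the post's 32MB / 128-192MB figures at face value (rather than as measured die data), the implied per-bit density advantage of eDRAM over 6T-SRAM works out as follows:

```python
# Hypothetical numbers from the discussion above, not die measurements:
# the same area that holds 32 MB of 6T-SRAM is claimed to fit 128-192 MB
# of eDRAM, so the implied cell-density ratio is simply:
esram_mb = 32
edram_low_mb, edram_high_mb = 128, 192
ratio_low = edram_low_mb / esram_mb    # 4x
ratio_high = edram_high_mb / esram_mb  # 6x
print(f"implied eDRAM density advantage: {ratio_low:.0f}x-{ratio_high:.0f}x")
```

That 4x-6x range is consistent with the usual intuition that a 1T1C eDRAM cell is several times smaller than a six-transistor SRAM cell.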
 
I think the lesson of the day should be that taking shortcuts usually ends up wasting much more time and money in the long run.

Usually it's much better to take the most direct path to solving your problem than to devise a complex solution.
 

Exactly, that's what I learned after working for 15 years in the semiconductor industry....
 
I'd argue that's exactly what MS did. They couldn't afford 8GB of GDDR5 back when they were planning. DDR3 is too slow. For more BW they wanted a scratchpad. eDRAM limits you to certain manufacturers. Ergo the simple solution was SRAM.

I'd say the problem was not wanting to get tied into contract or supplier knots. Committing to eDRAM and the one manufacturer would have given a far more ideal solution; the die-space saving alone would surely have reduced cost.
 
Gotta wonder what the cost projections were if they had gone with an on-package, off-die approach with eDRAM again, especially considering how node shrinks are slowing down and becoming more expensive.

Also wonder what the trade-offs (cost/die-size/bandwidth) would have been with a true cache as opposed to scratchpad...
 
I thought it was already explained that MSFT at least breaks even on the X1; that was part of their design goals.

That is only because they forced a $100 premium onto the market. If they had entered at price parity with the PS4, their bottom line would be getting smoked... The XB1 isn't a bad design; it's just too conservative a design, underpowered relative to its competition, and mis-marketed. Even as someone who enjoys the capabilities of the box itself, I will say that as time goes forward it will probably flop.

I think the third-party title differences as well as the price differences will see the PS4 begin to pull away sooner rather than later. Honestly, in terms of power in their package, it should be priced BELOW the PS4. Don Mattrick and the Xbox design team SHOULD have been fired, as Xbox represents their main tentpole in the consumer space and they botched it.
 
They didn't expect 8GB of fast RAM to be affordable for launch, so they went another way. Whether that's bad luck or incompetence or something in between, I don't know. Sony didn't seem confident about 8GB either, but in the end it worked out for them. Without knowing how that kind of industry projection is done, I would guess Sony just took a bigger risk in their design and it paid off. They were fully prepared to go with 4GB, which I think would have been a problem.
 