Predict: The Next Generation Console Tech

Status
Not open for further replies.
I'm not sure what Carmack plans to do with Tech 6. His original idea was of course sparse voxel octrees, but a lot has probably changed:
- Rage development ran too long; he's probably not going to scrap Tech 5 and start again, so development will become more gradual
- SVOs aren't such a promising solution for the near future, as standard polygon rasterization still clearly has a lot more to offer; then there are the prototype micropolygon renderers (meant to overcome rasterization's weaknesses) and other new avenues of research

Then again, I think it's safe to say that storage space should increase with the next generation. Deeper games already require two DVDs on the X360, and Rage could easily use even more space, on all platforms.
 
I'm not sure what Carmack plans to do with Tech 6. His original idea was of course sparse voxel octrees, but a lot has probably changed:
- Rage development ran too long; he's probably not going to scrap Tech 5 and start again, so development will become more gradual
- SVOs aren't such a promising solution for the near future, as standard polygon rasterization still clearly has a lot more to offer; then there are the prototype micropolygon renderers (meant to overcome rasterization's weaknesses) and other new avenues of research

Then again, I think it's safe to say that storage space should increase with the next generation. Deeper games already require two DVDs on the X360, and Rage could easily use even more space, on all platforms.

Jon Olick (ex-Naughty Dog and ex-id Software) posted a comparison of voxels vs. displacement mapping.

[Image: voxels_vs_dm.jpg - voxel render on the left, displacement map on the right.]


http://olickspulpit.blogspot.com/
 
Pretty sure eDRAM is more expensive, and it will probably need tiling etc. because it's never enough.
I can imagine Halo 5 running at 1764x894 instead of 1920x1080 and not being able to use a real HDR frame buffer because it can't fit in the eDRAM without severe tiling penalties. Like last time, really.

eDRAM is expensive and will never be enough, aside from the fact that it is best used in non-deferred renderers, which are pretty much old news today.
GDDR5 will benefit any renderer, and will also fit any imaginable 2011 frame buffer.

In this generation the Xbox 360 had 10MB of eDRAM and the output resolution is typically 1280x720. In the next generation we could easily see 40MB of eDRAM with an output resolution of at most 1920x1080, which is only a bit over twice the pixel count. If anything there would be less need for tiling, not more.

40MB is actually fairly conservative considering the overall increase in density from 90nm to 28nm, which is the smallest node I know of that can be used to produce eDRAM. I would bet that 64MB would in reality be no more expensive for a next-generation console than 10MB was for the Xbox 360.
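A quick sanity check of the scaling above - pure arithmetic, using only the resolutions and eDRAM sizes already mentioned:

```python
# Rough sanity check: how does the pixel count scale from 720p to 1080p,
# and how does that compare to growing the eDRAM from 10MB to 40MB?
px_720p = 1280 * 720      # 921,600 pixels
px_1080p = 1920 * 1080    # 2,073,600 pixels

pixel_ratio = px_1080p / px_720p   # 2.25x the pixels
edram_ratio = 40 / 10              # 4x the eDRAM

# eDRAM bytes available per output pixel in each scenario
bytes_per_px_360 = 10 * 1024**2 / px_720p     # ~11.4 bytes/pixel
bytes_per_px_next = 40 * 1024**2 / px_1080p   # ~20.2 bytes/pixel

print(f"pixel ratio: {pixel_ratio:.2f}x, eDRAM ratio: {edram_ratio:.0f}x")
print(f"eDRAM per pixel: 360 ~{bytes_per_px_360:.1f}B, next ~{bytes_per_px_next:.1f}B")
```

With roughly twice the eDRAM per output pixel, the "less need for tiling" conclusion holds under these assumptions.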
 
In this generation the Xbox 360 had 10MB of eDRAM and the output resolution is typically 1280x720. In the next generation we could easily see 40MB of eDRAM with an output resolution of at most 1920x1080, which is only a bit over twice the pixel count. If anything there would be less need for tiling, not more.

40MB is actually fairly conservative considering the overall increase in density from 90nm to 28nm, which is the smallest node I know of that can be used to produce eDRAM. I would bet that 64MB would in reality be no more expensive for a next-generation console than 10MB was for the Xbox 360.

It's true that they targeted 720p, but (some technology experts claim) they did not take 128-bit color rendering into account, or the penalties that tiling would cause.
Also, the (almost) free MSAA that the eDRAM provided did not work that well in some engines, where it was only applied in early render passes. It looks like MSAA will be even less applicable in the future, so the eDRAM loses some of its usefulness.

I did not calculate a 1080p 128-bit frame buffer, but I can imagine that by the time the next Xbox is released, and during its lifecycle, there will be new ideas which would exceed a 40 or 64MB frame buffer. But maybe tiling will have less of a penalty by then, so it's still possible to have the benefits of eDRAM in 2015 :)

Btw, I almost always forget the bandwidth and fill rate of eDRAM, which will most certainly always have a place in the future :cool:
 
Jon Olick (ex-Naughty Dog and ex-id Software) posted a comparison of voxels vs. displacement mapping.

Yeah, and so what? Both are procedural, and the comparison can't be applied to any actual asset that's not generated on the fly.

Voxels with procedural textures have been used for many years for smoke/fire/dust effects, but there are also other approaches. This however has nothing to do with anything else, like environments, props or characters (unless they're made of dust).
 
If that were true, then hard disks should have abysmal seek times too. And the 360 uses CAV, so the drive doesn't change RPM when reading different sectors.

Hard drives do have abysmal seek times, relative to SSDs, RAM, and other non-spinning media. I think you missed the point of the post though...

Seek times aren't any better; that is a product of it being a spinning drive with plastic media. Transfer rates are higher, but that is a product of data density. You're still going to have ~100ms seek times; it's the nature of a plastic medium that must be spun up slowly so it doesn't unbalance and/or explode.

plastic media.

That.
 
The extra capacity is important because a John Carmack RAGE engine (id Tech 5) game wouldn't have to compress the data as much, so the texture resolution would be higher. RAGE has 100GB of data for just the textures alone.

If you (or someone else) can provide it, I would love a link to this info!
 
It looks like MSAA will be even less applicable in the future, so the eDRAM loses some of its usefulness.

Nearly everyone is working on post-process "AA", like edge detection. They are crap compared to proper MSAA, but they are certainly more friendly to your framebuffer.

I did not calculate a 1080p 128-bit frame buffer, but I can imagine that by the time the next Xbox is released, and during its lifecycle, there will be new ideas which would exceed a 40 or 64MB frame buffer.
Color data for 1080p at 128 bits per pixel is about 32MB; perhaps another 10MB or so for depth and stencil?
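The 32MB figure above checks out; here is a minimal sketch of the arithmetic (the 16 bytes/pixel reading of "128-bit" follows the post, while the 4-byte D24S8 depth/stencil format is my assumption):

```python
def buffer_mb(width, height, bytes_per_pixel):
    """Size of one render target in MiB."""
    return width * height * bytes_per_pixel / 1024**2

# 128-bit color = 16 bytes per pixel (e.g. four FP32 channels)
color = buffer_mb(1920, 1080, 16)         # ~31.6 MiB
# A common D24S8 depth/stencil target is 4 bytes per pixel
depth_stencil = buffer_mb(1920, 1080, 4)  # ~7.9 MiB

print(f"color: {color:.1f} MiB, depth+stencil: {depth_stencil:.1f} MiB")
```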

20MB is what you could painlessly put on-chip with the GPU/CPU in a first iteration. If they are comfortable with more chips in the console, having 64MB or more wouldn't be wholly insane.
But maybe tiling will have less of a penalty by then, so it's still possible to have the benefits of eDRAM in 2015 :)
On ATI designs, tiling has the cost of duplicating what you spend on geometry. They could certainly tile next gen, but that would mean we wouldn't get much better geometry.
 
20MB is what you could painlessly put on-chip with the GPU/CPU in a first iteration.

That will not happen. There is no standard CMOS process (for CPUs) that allows eDRAM, as far as I know. Having a design that allows multiple foundries is essential for a console manufacturer, for cost/risk reasons.
 
If you (or someone else) can provide it, I would love a link to this info!

There were some discussions about whether or not Rage would be on two or three discs on 360. If you look for that, you're bound to find something ...
 
That will not happen. There is no standard CMOS process (for CPUs) that allows eDRAM, as far as I know. Having a design that allows multiple foundries is essential for a console manufacturer, for cost/risk reasons.

What exactly is the difference between eDRAM and the L2/L3 caches we know from the CPU world, manufacturing-wise? I think eDRAM might be easier to manufacture if it can be clocked slower than the CPU.

In any case, the POWER5 architecture (much like that used in the X360's Xenon) supported a 36MB L3 cache on a daughter die back in 2004; you would think this capacity has at least quadrupled since then: http://en.wikipedia.org/wiki/POWER5


There were some discussions about whether or not Rage would be on two or three discs on 360. If you look for that, you're bound to find something ...

After Googling, it appears Carmack mentioned in his 2011 QuakeCon keynote that they had 100GB of art and texture data which had to be shoehorned down to fit into a 20GB package.

Btw, this seems to me to negate all the explanations of how more than 4GB of RAM for consoles would be overkill due to the production costs required by that size.

Here we have id Software with 100GB of beautiful textures (reportedly) and no RAM that can fit them to show them off in their original beauty... Instead, they had to spend time, effort and money to optimize that sh*t down to fit the 256MB to 384MB memory pools offered by the current-gen consoles - an effort which probably proved much more costly and soul-numbing than the original creation of said textures, no doubt.
 
The "100GB" quote was taken from Carmack's keynote from this year's QuakeCon. He also said that id is considering releasing a downloadable set of textures of that quality for just one level (on PC) so that people could see the difference for themselves.
 
Here we have id Software with 100GB of beautiful textures (reportedly) and no RAM that can fit them to show them off in their original beauty... Instead, they had to spend time, effort and money to optimize that sh*t down to fit the 256MB to 384MB memory pools offered by the current-gen consoles - an effort which probably proved much more costly and soul-numbing than the original creation of said textures, no doubt.

You've managed to completely and utterly miss the point of MegaTexture.
 
What exactly is the difference between eDRAM and the L2/L3 caches we know from the CPU world, manufacturing-wise? I think eDRAM might be easier to manufacture if it can be clocked slower than the CPU.
CPU caches are typically made from SRAM, which is a completely different device from (e)DRAM.

To put it simply, SRAM is made of (more or less) the same kind of transistors as the CPU itself, and you typically need 8 of them per bit for fast caches, or 6 for slower ones. SRAM is especially fast to read and write, but it takes a lot of space ( = is expensive) and is very power-hungry.

eDRAM is just DRAM made on the same chip as the computing resources (the "e" is for embedded), with each bit cell consisting of a capacitor paired with a transistor. It is ~3-4 times as dense as SRAM ( = you can fit more in the same space, so it's cheaper), but it has the very slow access latency of DRAM ( = less useful for caches, BUT since you can just make the interface wider, this isn't a problem for throughput), and as it's not made purely from transistors, you need a special process to manufacture it.
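The density claim above can be made concrete with a toy comparison (the 6T/8T cell sizes and the ~3-4x density figure are taken from the post; the 3.5x midpoint is my pick):

```python
# Devices per stored bit for each memory type (per the post)
SRAM_FAST_T = 8   # 8T cell, fast caches
SRAM_SLOW_T = 6   # 6T cell, slower caches
EDRAM_T = 2       # 1 transistor + 1 capacitor

# Capacity of SRAM that fits in the silicon area of a 10MB eDRAM block,
# assuming eDRAM is ~3.5x as dense as SRAM (midpoint of the 3-4x claim)
edram_mb = 10
density_ratio = 3.5
sram_equiv_mb = edram_mb / density_ratio   # ~2.9MB of SRAM in the same area

print(f"{edram_mb}MB eDRAM ~= {sram_equiv_mb:.1f}MB SRAM in the same silicon area")
```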

Here we have id Software with 100GB of beautiful textures (reportedly) and no RAM that can fit them to show them off in their original beauty... Instead, they had to spend time, effort and money to optimize that sh*t down to fit the 256MB to 384MB memory pools offered by the current-gen consoles - an effort which probably proved much more costly and soul-numbing than the original creation of said textures, no doubt.

You only ever need a fraction of that 100GB on screen at once. With MegaTexture, every surface has a unique texture. Do you think they need more than 4% of their game on screen at once?
 
CPU caches are typically made from SRAM, which is a completely different device from (e)DRAM.

To put it simply, SRAM is made of (more or less) the same kind of transistors as the CPU itself, and you typically need 8 of them per bit for fast caches, or 6 for slower ones. SRAM is especially fast to read and write, but it takes a lot of space ( = is expensive) and is very power-hungry.

eDRAM is just DRAM made on the same chip as the computing resources (the "e" is for embedded), with each bit cell consisting of a capacitor paired with a transistor. It is ~3-4 times as dense as SRAM ( = you can fit more in the same space, so it's cheaper), but it has the very slow access latency of DRAM ( = less useful for caches, BUT since you can just make the interface wider, this isn't a problem for throughput), and as it's not made purely from transistors, you need a special process to manufacture it.
Good description!

As far as I know, only IBM uses eDRAM in high-performance CPUs (the POWER7+ series has 32MB of eDRAM cache), on a special custom process.
That is a crazy-expensive server CPU, not likely to turn up in any console.
 
You've managed to completely and utterly miss the point of MegaTexure.

Are you implying that MegaTexture is in essence something more than just another streaming technique that introduces texture pop-in? In that very same interview, Carmack admitted that Rage will look horrible if someone does a 360-degree turn on the X360 Arcade without the HDD. I'd say the HDD-equipped systems do the very same thing, just somewhat faster - still visibly drawing in higher-detail textures while a scene is already being rendered.

Does everyone here really like texture pop-in so much that they're willing to see "virtual texturing" built in as a "feature", rather than calling it a dirty hack that should only be used when the system's RAM has been brought to its knees and you cannot optimize a given area's texture set down to fit completely in RAM without it looking too ugly in screenshots?

As far as I know, only IBM uses eDRAM in high-performance CPUs (the POWER7+ series has 32MB of eDRAM cache), on a special custom process.
That is a crazy-expensive server CPU, not likely to turn up in any console.

Except it already kind of did in the X360 - Xenon is based on the very same Power architecture that the POWER7+ originates from ;)
 
With MegaTexture, every surface has a unique texture. Do you think they need more than 4% of their game on screen at once?

In fact, proper virtual texturing requires even less runtime memory on its own - since the system pulls only the lowest necessary MIP level, it theoretically needs only one texel per drawn pixel. Of course, in practice id uses 128x128 tiles as the smallest element, and there's overdraw as well, but I'd say the MegaTexture cache itself probably only needs about 30-50MB.

Speeding up reads from even the hard drive is another story entirely, so the rest of the console's memory is used for caching. But this is why I keep saying that even 4GB should be more than enough if virtual texturing is used in combination with fast background storage.
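The 30-50MB estimate above can be sanity-checked with a rough model (the 128x128 tile size and the one-texel-per-pixel idea come from the posts; the bytes-per-texel and overdraw factor are my assumptions):

```python
def vt_cache_mb(width, height, bytes_per_texel=4, tile=128, overdraw=4.0):
    """Rough resident-texture estimate for virtual texturing:
    ~1 texel per drawn pixel, inflated by an overdraw/safety factor,
    rounded up to whole tile x tile pages."""
    texels_needed = width * height * overdraw
    tiles = -(-texels_needed // (tile * tile))   # ceiling division
    return tiles * tile * tile * bytes_per_texel / 1024**2

print(f"720p:  ~{vt_cache_mb(1280, 720):.0f} MiB")   # ~14 MiB
print(f"1080p: ~{vt_cache_mb(1920, 1080):.0f} MiB")  # ~32 MiB
```

With these assumed parameters the resident set lands in the same ballpark as the 30-50MB guess, which is tiny next to even a 4GB pool.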
 
Are you implying that MegaTexture is in essence something more than just another streaming technique that introduces texture pop-in?

Does everyone here really like texture pop-in so much that they're willing to see "virtual texturing" built in as a "feature", rather than calling it a dirty hack that should only be used when the system's RAM has been brought to its knees and you cannot optimize a given area's texture set down to fit completely in RAM without it looking too ugly in screenshots?

Honestly, it's very annoying that you make assumptions and draw conclusions without even superficial knowledge of the technology involved. You do not contribute much here beyond thread derailment and pointless debates; I'd really suggest doing some reading for a while.
There are many threads here on B3D as well, complete with links to white papers and presentations, posts from actual developers and so on, all of which you should probably be a lot more familiar with before getting into these debates...
 
Are you implying that MegaTexture is in essence something more than just another streaming technique that introduces texture pop-in?
Oh look, you're saying more RAM is better for textures again. Wow, thanks for that. Repeating the same point over and over without adding anything to your argument has finally got through, and suddenly I see the light. 16GB of RAM for next gen is a certainty. :yep2:

Except it already kind of did in the X360 - Xenon is based on the very same Power architecture that the POWER7+ originates from ;)
No, it didn't - unless you count a Pentium 4 as the same as a Pentium (and similarly an i7 as the same thing as an 80286). There's far more to a processor than its ISA.
 