Predict: The Next Generation Console Tech

Seriously though, if you can take the chips in the iPad 2 and somehow employ them in a massively parallel manner, shouldn't you be able to use 50-100x the iPad 2's chips at 200 watts, depending on how much you lose to active cooling and increased data travel between the chips?
 
Costs increase in a non-linear way as power increases (cooling needs, enclosure costs, shipping costs).
I really doubt we will see a 200W console; more likely 100~150W from MS and Sony, and 75~100W from Nintendo.
 
Anyone now believe that Cafe's GPU will be a modified, custom RV740 with EDRAM?

Nope.
That would result in unnecessary performance, unnecessary power consumption and unnecessary costs.

Sorry, it's just how Nintendo thinks.
 
Anyone now believe that Cafe's GPU will be a modified, custom RV740 with EDRAM?

People need to get over the edram thing. It's not a good idea, with deferred rendering taking hold. Besides that, you need huge, expensive amounts of it to operate at HD (1080p+) resolutions.

I bet you won't see it in any console ever again.
 
I wonder if Nintendo took deferred rendering into consideration for its GPU? Personally, I hope so, as I like how it looks.
 
Pardon, but I thought EDRAM was a potential solution to the bandwidth problem. (There was a dissenter in that discussion, though. The dissent was based on developers finding a way to work within a limitation, not on bandwidth not being a limitation without EDRAM.)
 
Pardon, but I thought EDRAM was a potential solution to the bandwidth problem. (There was a dissenter in that discussion, though. The dissent was based on developers finding a way to work within a limitation, not on bandwidth not being a limitation without EDRAM.)

It probably is, but at what cost? :???:
 
:LOL:

I can see you're out of your mind.

Or you're missing the most obvious solution: if you shift the release date by four years, then you can start with the same power budget as the first-rev PS3, with the same overall performance improvement (a ~10x faster machine).
So, PS4 in 2015, or a 2-3x more powerful machine earlier, Wii style :)
 
People need to get over the edram thing. It's not a good idea, with deferred rendering taking hold. Besides that, you need huge, expensive amounts of it to operate at HD (1080p+) resolutions.

The problem with the edram in the 360 is that: 1) it's too specialized; it is only good for framebuffer data, and works best with forward renderers, which are going out of style as you point out. 2) There is not enough of it.

If you could stack a 2Gbit DRAM die on top of your GPU, or just connect it through a fast, wide on-substrate bus, you'd have 256MB of very fast memory. At 1080p, 256MB equates to roughly 128 bytes per pixel. Let the memory be general purpose and let the developers decide if they want to blow it all on multiple render targets and lots of MSAA (and stereoscopic rendering), or use some of it for textures, fast access for GPGPU or the CPU, etc.
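
As a rough sanity check of that bytes-per-pixel figure (a minimal sketch; the 1080p frame size and binary units are assumptions, not stated in the post):

```python
# Back-of-the-envelope: bytes per pixel offered by a 2 Gbit stacked DRAM die
# at 1080p. Assumes binary units (2 Gbit = 256 MiB) and a 1920x1080 frame.
dram_bytes = (2 * 1024**3) // 8      # 2 Gbit -> 268,435,456 bytes (256 MiB)
pixels_1080p = 1920 * 1080           # 2,073,600 pixels

bytes_per_pixel = dram_bytes / pixels_1080p
print(f"{dram_bytes // 2**20} MB -> {bytes_per_pixel:.0f} bytes per pixel")
# 256 MB -> 129 bytes per pixel, i.e. roughly the 128 bytes quoted above
```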

The PS2 proved that edram can be immensely useful.

Cheers
 
People need to get over the edram thing. It's not a good idea, with deferred rendering taking hold. Besides that, you need huge, expensive amounts of it to operate at HD (1080p+) resolutions.

I bet you won't see it in any console ever again.

Whoa, when did EDRAM become a "bad" thing?
 
What happens if Nintendo, for the Wii 2, just glues together four Flippers to be able to do 1280*720 resolution?

It wouldn't get any multi-platform games and would repeat Wii history, very probably.

BTW, they will be publishing RE: Mercenaries for 3DS in some regions, which suggests that this time they care about third parties.
 
The problem with the edram in the 360 is that: 1) it's too specialized; it is only good for framebuffer data, and works best with forward renderers, which are going out of style as you point out. 2) There is not enough of it.

If you could stack a 2Gbit DRAM die on top of your GPU, or just connect it through a fast, wide on-substrate bus, you'd have 256MB of very fast memory. At 1080p, 256MB equates to roughly 128 bytes per pixel. Let the memory be general purpose and let the developers decide if they want to blow it all on multiple render targets and lots of MSAA (and stereoscopic rendering), or use some of it for textures, fast access for GPGPU or the CPU, etc.

The PS2 proved that edram can be immensely useful.

Cheers

Could the same approach be used in desktop GPUs, or is this something that would require knowledge of the underlying architecture to code to?
 
In my opinion, the silliest thing about the 360's EDRAM implementation is that it's used as a dedicated framebuffer with no way to write directly to main sys/video memory. Such a scheme creates additional unnecessary problems for deferred renderers, which is unacceptable for next gen. The obvious solution is to just use EDRAM as a cache, maybe only for the framebuffer, and that's it.
 
In my opinion, the silliest thing about the 360's EDRAM implementation is that it's used as a dedicated framebuffer with no way to write directly to main sys/video memory. Such a scheme creates additional unnecessary problems for deferred renderers, which is unacceptable for next gen. The obvious solution is to just use EDRAM as a cache, maybe only for the framebuffer, and that's it.
Xenos could bypass its edram fb by means of MEMEXPORT (not for RMW ops, though). Its efficiency at transferring data in that fashion, though, was orders of magnitude lower than the dedicated fb -> resolve path. And that is before considering how it would affect the other users of the host bus.
 
In my opinion, the silliest thing about the 360's EDRAM implementation is that it's used as a dedicated framebuffer with no way to write directly to main sys/video memory. Such a scheme creates additional unnecessary problems for deferred renderers, which is unacceptable for next gen. The obvious solution is to just use EDRAM as a cache, maybe only for the framebuffer, and that's it.

Xenos could bypass its edram fb by means of MEMEXPORT (not for RMW ops, though). Its efficiency at transferring data in that fashion, though, was orders of magnitude lower than the dedicated fb -> resolve path. And that is before considering how it would affect the other users of the host bus.
If you do a 2D pass with no overdraw (deferred lighting or post-processing, for example), you should get pretty much equal perf from rendering to the backbuffer + resolving compared to memexport. It's not slow by any means.

However, if you are rendering complex 3D geometry and have a high amount of overdraw, then EDRAM surely helps a lot. And that's why it's there in the first place: why write something to main memory that soon gets overwritten? EDRAM is a great thing to have. And it's especially a great thing for deferred renderers, since deferred renderers write more data per pixel, and that's where the EDRAM bandwidth helps a lot.

Simple example: a 1280x720 screen, a 24f8 depth/stencil buffer (4 bytes per pixel), double 16Fx4 g-buffers (8 bytes per pixel each, a pretty common deferred setup), and 4x average scene overdraw (a pretty common overdraw factor for complex scenes):

with EDRAM:
1. Render geometry to g-buffers = no main memory BW used (everything stays on chip)
2. Resolve g-buffers to main memory = (4+8+8)*1280*720 = 18.4MB of memory writes

without EDRAM:
1. Render geometry to g-buffers = 4*(4+8+8)*1280*720 = 73.7MB of memory writes. Also, z-buffering needs to read the z-value four times for each pixel = 4*4*1280*720 = 14.7MB of memory reads in addition to the writes.

With EDRAM the memory traffic was 18.4MB; without it, 88.4MB. => EDRAM helps deferred rendering a lot.
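
For anyone who wants to reproduce the arithmetic, here is a small sketch of the calculation above (buffer sizes and overdraw factor are taken from the post; the use of decimal megabytes is inferred from its figures):

```python
# Memory traffic for one 1280x720 deferred g-buffer pass, with and without EDRAM.
width, height = 1280, 720
pixels = width * height

depth_bytes = 4            # 24+8 bit depth/stencil
gbuffer_bytes = 8 + 8      # two 16Fx4 render targets
overdraw = 4               # average scene overdraw factor

MB = 1_000_000             # decimal megabytes, matching the post's figures

# With EDRAM: the geometry pass stays on chip; only the final resolve hits main memory.
with_edram = (depth_bytes + gbuffer_bytes) * pixels

# Without EDRAM: every overdrawn pixel write goes to main memory,
# plus the z-buffer reads needed for depth testing.
writes = overdraw * (depth_bytes + gbuffer_bytes) * pixels
depth_reads = overdraw * depth_bytes * pixels
without_edram = writes + depth_reads

print(f"with EDRAM:    {with_edram / MB:.1f} MB")     # 18.4 MB
print(f"without EDRAM: {without_edram / MB:.1f} MB")  # 88.5 MB (the post's 88.4 comes from summing rounded figures)
```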
 