Predict: The Next Generation Console Tech

You do realize that XB360 already has triple core CPU, right?

Yeah, I wasn't suggesting the previous generation didn't already have multi-core processors. I was citing it as a "standard" or "expectation": they will have these features at a more integrated level than before. Much like what I said about the GPU.
 
Piracy of games is still pretty much limited to games under 7GB (360 games), so when you get to PS3 games and next-gen games it's not so easily demonstrated that digital distribution is workable.

It is not limited to games under 7GB. Sure, people might not want to download 40GB games, but going over 7GB is no big deal.

And most PS3 games are not much bigger, if at all, than the same game on the 360.
 

This certainly isn't true!

Even for a lot of multiplatform games... Burnout Paradise is a particular example: the DLC version released later on PSN had to be reduced significantly in size before it could be offered on PSN.
 

Was there any difference between the PSN and Blu-ray versions of BP?
 
I'm not sure about the actual game in terms of visuals and such, but it did contain a lot of the DLC, and the code had been altered to the extent that my save file from the disc version wouldn't work at all on the DLC version :devilish:.

That led me to suspect that the DLC version on PSN might well have been a port of the 360 DVD build of the game. But that's just my own suspicion.
 

No, just because your save file does not work does not mean anything.

The game on PC is only 3GB (and that is the version that would have used the most space if it needed it).

And how do you know that the game had to be "reduced significantly" when no one has any proof that it was? (Other than a drop in needed space from no longer being on a Blu-ray disc for a slow Blu-ray drive that needs padding and the like.)
 

Again... I said it was my own suspicion.

Regardless... let's get back on topic, shall we ;)
 
Memory issues appear to be the biggest hurdle as well as the best opportunity to differentiate and ensure long legs. AlStrong has pointed out that, based on GDDR5 projections, it is very difficult to get 2GB on a 128-bit bus and very, very unlikely to see more than 2GB due to the number of DRAM modules needed. While I am not sold on a 256-bit bus being "off the table" by default, maybe there are some potential workarounds.
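AlStrong's capacity concern can be sanity-checked with a quick sketch. GDDR5 devices use 32-bit I/O, so a 128-bit bus only hosts four chips (eight in clamshell mode); the per-chip densities below are assumptions for illustration, not figures from the thread.

```python
# Capacity from chip count, assuming standard 32-bit-I/O GDDR5 devices
# and per-chip density in gigabits; clamshell mode doubles the chip count.
def pool_gb(bus_bits, density_gbit, clamshell=False):
    chips = (bus_bits // 32) * (2 if clamshell else 1)
    return chips * density_gbit / 8  # gigabits -> gigabytes

print(pool_gb(128, 1))                   # 0.5 GB with 1Gb chips
print(pool_gb(128, 2, clamshell=True))   # 2.0 GB is already the ceiling
print(pool_gb(256, 2, clamshell=True))   # 4.0 GB needs the wider bus
```

This is why 2GB looks like the practical limit on 128 bits: you would need higher-density chips than were projected to be available.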

The first idea is: why not a "PC-ish" design, with 3 memory sticks for the CPUs and 1GB of fast GDDR5 for the GPU? Memory modules are very, very cheap. If your competitor is looking at possibly 2GB of memory, tops, due to various concerns, what better way to gain an edge? 3 x 1GB sticks is very cheap, and in 2012 (and looking forward into 2013) 2GB sticks may be more reasonable for supply purposes. Having dedicated CPU buses doesn't sound like a bad idea, and the potential for 6GB of system memory could be a big differentiator at a small cost. In turn, the GPU could focus on 1GB of very fast video memory. 7GB versus 2GB is a pretty big difference, and when talking about streaming, open worlds, and possibly moving to alternative rendering (like voxels) this could be a game changer, even if the aggregate bandwidth is less (but enough where needed).

The other thought is: with eDRAM, what is the potential for binning? Why devote eDRAM to the complete framebuffer; can't it be binned? The large cost with tiling appears to be that the vertex work needs to be redone. With setup rates not improving quickly and small polygons more common, Xenos-style tiling doesn't appear to be the smart approach. So why not bin the buffers, or have a DRAM buffer adjacent to the eDRAM?

The thought was this: let's say you have enough memory for a 256x256 tile (about 0.7MB). As you fill the buffer, a copy is sent to a larger (and slower) cache that is able to hold your z-buffer and other various buffers. So instead of re-calculating various buffers while tiling, you would essentially re-use what you already calculated.
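The ~0.7MB tile figure roughly checks out; a quick sketch, where the per-pixel formats (RGBA8 or FP16 color plus a 32-bit depth/stencil buffer) are my assumptions, not something stated in the post:

```python
# Rough per-tile footprint; pixel formats are assumed for illustration.
TILE = 256 * 256                      # pixels in a 256x256 tile

rgba8 = TILE * (4 + 4)                # 32-bit color + 32-bit depth/stencil
fp16  = TILE * (8 + 4)                # 64-bit HDR color + depth/stencil

print(rgba8 / 2**20)                  # 0.5 MiB
print(fp16 / 2**20)                   # 0.75 MiB, near the ~0.7MB figure
```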

This may take some fancy footwork, but if you wanted to utilize eDRAM this could be a way to mitigate one of the primary complaints this generation. And if such a tiling & buffer mechanism could work, you could invest in a more robust eDRAM implementation.
 
What do you mean by "bin"? Do you mean it in the way Intel uses it for Larrabee?
 
I don't have a specific implementation/method in mind, so "bin" could also mean "cache the full buffers", with the eDRAM holding a tile of a buffer (or buffers). I guess the idea isn't too dissimilar to L1/L2/L3/main memory.
 
OK, I read your post again with this in mind. Your idea is to avoid round trips to main RAM, not to take advantage of the bandwidth a chip with built-in eDRAM can provide. eDRAM takes less room than SRAM, and memory is always welcome, but there are process issues.
Overall, the way I perceive your post, I think that before deciding whether eDRAM is part of the party, we should know what will happen to the ROPs/RBEs.
 
I saw this on some sites:

http://jobs.gamasutra.com/jobseekerx/viewjobrss.asp?cjid=20066&accountno=266

Microsoft Games Studios is looking for experienced game developers to work on next generation Xbox platform titles and Windows games for the first-party publishing team. Come join the publishing teams that worked on games such as Jade Empire, Fable, Conker, PGR3, Rise of Nations, Zoo Tycoon, Dungeon Siege, Vanguard, and RalliSport Challenge. Work with our top first-party development partners to help ship a great first-party line up for next generation Xbox platform.

343 Industries, Microsoft's internal Halo studio, is looking for an experienced Level Designer who can design and build levels for our next big project.

Halo 4 on Xbox 720 confirmed?

I'm almost certain Halo "4" (whatever it's called) will release in 2013, since big Halos release every 3 years and Reach = 2010. I find it hard to believe Halo 4 will be on Xbox 360 in fall 2013. So it's most likely a 720 game. And by 343 Industries, not Bungie.
 

Tons of people on the PC downloaded Dragon Age at almost 16 gigs. WoW with expansions is over 20 gigs, I think, and people download it. I downloaded LOTRO with expansions and my install is over 20 gigs as well. Nowadays you have console gamers downloading 1-3 gig demos and it doesn't seem to matter; even Arcade games are over 500 megs now too.

It's happening on the PC now. Fast forward another 2-3 years and I don't see why it can't happen on consoles, especially if internet speeds go up again. Remember, since 2005 here in the States we got FiOS, which brought us 20/5 connections and forced cable companies to match it. Now FiOS offers 50/20, there is a rumored bump to 75/30 coming, and the 20/5 users will get bumped to 50/20.

So I don't see a problem in the future. Just let people preload the game early, and perhaps even allow games to be played once the download hits 50% after the release date; that way gamers can start to play as the rest downloads.
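The "play at 50%" idea could come down to a simple threshold check; a minimal sketch, where the 50% figure is from the post and everything else (sizes, function name) is hypothetical. It also assumes the early chunks contain the data needed to start playing.

```python
# Gate game launch on release status and downloaded fraction.
def can_launch(bytes_done, bytes_total, released, threshold=0.5):
    return released and bytes_done >= bytes_total * threshold

GiB = 2**30
print(can_launch(9 * GiB, 18 * GiB, released=True))    # True: half is down
print(can_launch(9 * GiB, 18 * GiB, released=False))   # False: preload only
print(can_launch(4 * GiB, 18 * GiB, released=True))    # False: keep waiting
```

In practice the game would also need its assets ordered so that the first half downloaded is the half played first.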
 

How much more difficult would it be to have a unified 256-bit bus vs two 128-bit buses like the PS3 design?

I mean, in the PS3-style design, having a 128-bit bus from the CPU to RAM would give you 2 gigs, and having a 128-bit bus from the GPU to RAM would give you 2 gigs, so 4 gigs of system RAM in total. You still need a bus from the CPU to the GPU as well, so that's two bus systems. I don't know if the CPU-to-GPU link is a 128-bit bus too. No idea how that works.

With the 360 you have the CPU to GPU to RAM. Why can't you have the GPU to RAM at 256 bits?

Also, who is to say GDDR5 is the RAM they will use? If this launches in 2011/12 we may be onto GDDR6 or another type of RAM.
 
In the UK, we're not looking at 50+ Mbps connections being widely available until 2015 if we're lucky. We're on 8 Mbps copper cabling, with most people getting 2-3 Mbps on that, and that isn't going to change any time soon. There's no real national infrastructure policy here; whatever we have is a purely capitalistic venture. The government only recently made a commitment to 2 Mbps in every home by 2012!!
 
Let's say your lower threshold on die area for a 256-bit bus is about 180-200mm^2 (that seems to be toward the low end, although other factors can push the number up). That means your GPU can never be smaller than 180mm^2, even after shrinks. 2 x 128-bit buses would allow both chips to be reduced in size below such limits. IIRC, when the Xbox 360 launched, Xenon was about 160mm^2 and Xenos about 230mm^2 without the daughter die (over 100mm^2, IIRC), or thereabouts. I think Cell was in the 230mm^2 range and RSX a little over 300mm^2. On the Xbox it is pretty easy to see why a 256-bit bus didn't fit into MS's long-term plans. For Sony, XDR was a design component of Cell.

While there are a lot of options, seeing the prices drop on system memory makes me think that maybe a good compromise based on current technology, and simple to deploy, would be a large pool of system memory (3x2GB) and then a very fast, smaller 1-2GB pool for the GPU. You limit the GPU and CPU fighting over system resources, and while it is more complex than a UMA, you are also giving developers 7GB of memory. While it would be nice to have more video memory, having 6GB of RAM (versus optical discs or HDDs) for game storage could mean fast caching and streaming, as well as using the system memory as virtual memory. Ideal? No. But without knowing whether Rambus will hit 500GB/s-1TB/s on XDR2 and release it in time, let alone get GPU/CPU vendors to support it, and with some uncertainty about how well eDRAM can be effectively deployed, it remains an option.

Bandwidth is a major concern, and developers may grimace at 60GB/s of system memory bandwidth and 170GB/s for the GPU. But given current hurdles, maybe having a ton of memory will be slightly consoling.
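Figures like those follow directly from bus width and per-pin data rate; a quick sketch, where the data rates are assumed values chosen to land near the 60GB/s and 170GB/s numbers, not vendor specs:

```python
# Peak bandwidth in GB/s: number of pins * gigabits per second per pin / 8.
def peak_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

print(peak_bw_gbs(192, 2.5))   # 60.0 GB/s, e.g. three 64-bit DDR3 channels
print(peak_bw_gbs(256, 5.3))   # 169.6 GB/s, a fast 256-bit GDDR5 setup
```

The 192-bit case matches the three-stick system-memory idea (three 64-bit channels); the 256-bit case shows why the GPU pool would want the wide bus.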
 
Even using GDDR5 and a 128-bit wide bus, bandwidth will be higher; the HD5770 has ~75GB/s now.
I don't know which kind of process is used for the memory cells; I searched and found that in early 2009 Samsung was using a 50nm custom process. GDDR5 will get faster; it won't fly, but it will get faster :LOL:
Even 70GB/s can be fine depending on where frame-buffer operations happen. Hardware designed around tile-based rendering, or including eDRAM, is pretty much needed; otherwise bandwidth constraints will hurt next-gen performance a lot (i.e. I mean you have a point ;) ).
 
I guess the question is how expensive 64 megs of eDRAM would be. That could help reduce strain on the bus again.


So does anyone know the price difference between a 128-bit bus and a 192-bit bus, and the difference in the amount of RAM you can put on each?
 