Predict: The Next Generation Console Tech

Ok, if I may play Devil's Advocate for a sec: A quick search of single GPUs running Crysis 2 finds that at 1900x1200 (fairly close to 1080) at 'Extreme' preset, only the GTX 580 was able to break 50 fps.

Now it seems to me that people here are of the opinion that Crysis 2 'Extreme' graphics at 1080p60fps will be standard in the next gen. I understand it's 2 years out, but that seems a shade optimistic. I'd love to see it.
 
Ok, if I may play Devil's Advocate for a sec: A quick search of single GPUs running Crysis 2 finds that at 1900x1200 (fairly close to 1080) at 'Extreme' preset, only the GTX 580 was able to break 50 fps.

Now it seems to me that people here are of the opinion that Crysis 2 'Extreme' graphics at 1080p60fps will be standard in the next gen. I understand it's 2 years out, but that seems a shade optimistic. I'd love to see it.

Go back to November 2005. The Geforce 7800 GTX 512MB is released with unprecedented levels of performance (and TDP). A single year later, a GPU with basically the same performance goes into the PS3, and it actually turns out to be a small disappointment for the geeks, as people were already onto DX10 and more than a hundred unified shaders.

If the next consoles from Sony and Microsoft are released in 2014, their GPU won't be comparable to a GTX580. I'm damn sure it'll be at least 3x faster, since it'll be at least as fast as the fastest GPU around in late 2012.
 
Go back to November 2005. The Geforce 7800 GTX 512MB is released with unprecedented levels of performance (and TDP). A single year later, a GPU with basically the same performance goes into the PS3, and it actually turns out to be a small disappointment for the geeks, as people were already onto DX10 and more than a hundred unified shaders.

If the next consoles from Sony and Microsoft are released in 2014, their GPU won't be comparable to a GTX580. I'm damn sure it'll be at least 3x faster, since it'll be at least as fast as the fastest GPU around in late 2012.

Well, the 7800 GTX had twice the local memory of RSX (and twice the bandwidth to that local memory), plus a 10% higher clock. That's enough to make a significant difference in performance.
 
I think we should remember that the G70/Geforce 7800GTX was "taped out" at the end of 2004.

So if we get next-gen consoles in 2014, they may well be built around GPUs that are standard in 2012.

Another possibility is that the same thing happens as with the Xbox (2001) and Xbox 360 (2005), whose GPUs were only completed a few months before launch.
 
What do you think of this?

Ok, just saw a compilation of the rumors and was thinking about a couple of things.

Like I commented in a previous post, let's say Nintendo wants to send output frames to both the TV and the 6" screen at the same time. The supposed integrated camera on the screen could have a less obvious use: it could be used to track finger positions near the screen surface. Set up like this, you wouldn't need to constantly shift attention between the TV and the touch screen.

So basically this could turn a TV screen of any size into a touch screen. Some games could use the traditional Wii kind of pointing controls; others could use this direct touch method.

Trying to explain myself better (I'm a layman in tech, so it would be interesting to see what more knowledgeable people think): let's say the camera can detect the heat signature of fingers close to the screen, even when not in direct contact. The developer could project a non-intrusive graphical representation of the fingers onto the TV screen, so the user instantly knows the fingers' position relative to the screen space without having to take their eyes off the TV.

I think a heat-detecting camera makes sense because it could make operation independent of lighting conditions.
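
For what it's worth, here's a minimal sketch of that mapping idea in Python. detect_fingertips() is a hypothetical stand-in for whatever tracking the hardware might actually expose, and the resolutions and coordinates are made-up assumptions, not anything from the rumors.

```python
# Minimal sketch: map fingertip positions seen by the (hypothetical) camera
# over the controller screen into TV-screen space, so a cursor overlay can
# be drawn without the player looking down.  All numbers here are made up.

TV_W, TV_H = 1920, 1080            # assumed main display resolution

def detect_fingertips():
    """Stub: pretend the camera reports fingertips hovering near the pad
    in normalized coordinates (0..1 across, 0..1 down)."""
    return [(0.25, 0.60), (0.70, 0.40)]

def to_tv_space(fingertips):
    """Scale normalized pad coordinates onto the TV, so the overlay cursor
    sits where the finger hovers over the controller screen."""
    return [(int(x * TV_W), int(y * TV_H)) for x, y in fingertips]

if __name__ == "__main__":
    for px, py in to_tv_space(detect_fingertips()):
        print(f"draw overlay cursor at TV pixel ({px}, {py})")
```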
 
Ok, if I may play Devil's Advocate for a sec: A quick search of single GPUs running Crysis 2 finds that at 1900x1200 (fairly close to 1080) at 'Extreme' preset, only the GTX 580 was able to break 50 fps.

Now it seems to me that people here are of the opinion that Crysis 2 'Extreme' graphics at 1080p60fps will be standard in the next gen. I understand it's 2 years out, but that seems a shade optimistic. I'd love to see it.

While I don't really know how powerful these cards are, I think being a closed environment would benefit the game. For example, let's say a GTX 580 was packed into a console: the game could probably run at 60fps, since the entire game can be optimized just for that one hardware configuration. With a PC, you run into so many different variations in hardware that it's harder to optimize.
 
Ok, if I may play Devil's Advocate for a sec: A quick search of single GPUs running Crysis 2 finds that at 1900x1200 (fairly close to 1080) at 'Extreme' preset, only the GTX 580 was able to break 50 fps.

Now it seems to me that people here are of the opinion that Crysis 2 'Extreme' graphics at 1080p60fps will be standard in the next gen. I understand it's 2 years out, but that seems a shade optimistic. I'd love to see it.

The tech enthusiast response to your concern would be that the new consoles from Sony and MS will be implemented on lithography that is one or even two generations beyond the GTX580, and that this will allow them to match or even surpass the GTX580 while being significantly smaller and cooler running.

The less trivial response would be that the console scenario of having a set target resolution and framerate, and a fixed resource to accomplish it with, is very different from benchmarking a PC program. If you take Crysis 2 as an example, it has a ton of settings and techniques, some of which are quite inefficient for what they accomplish, but since they can be turned off if you don't want them, and the game will encounter hardware from the mediocre of yesteryear to the top of the line years in the future, they might as well be made available. Plus they might make a certain sponsor look good in benchmarks, and reviewers like to use your product at its highest settings because there it will tax the high-end hardware, and thus we can pretend that these devices have some feeble justification outside e-penis concerns, and the carousel of hype, articles, ad revenue, and retail sales goes around another turn. Everybody wins, right?
Those extreme settings that reviewers like to use are typically very far from optimal in terms of visual return on rendering investment. You can experiment with tweaking the settings down - how far down can you go without affecting the visual impression of the game more than minimally? Or go the other way and start from minimum settings, increasing them judiciously until you start noticing that responsiveness is affected, or until you hit a certain target framerate on your system. Then max the settings and see how much you gained visually with that increase in rendering load, and how much it cost you in responsiveness.
These experiments do not fully mimic the console situation of targeting specific hardware and resolution/frame rate, but they give an idea of how much can be gained in visual quality per rendering cost by making the right set of compromises. A lot. And on consoles you can work in closer harmony with the particular hardware, and even have a dialog with the art asset creators on how to maximize the result given the peculiarities of the platform, which will produce even better visual results for your rendering effort.

Console games can look surprisingly good given the hardware, because knowledgeable people sat down and made the right compromises. Which is rather the opposite of reviewers benchmarking PC graphics cards with every setting on Max.
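
As a rough illustration of that tweaking experiment, here's a minimal settings-sweep sketch in Python. render_frame() is a hypothetical stand-in for a game's frame loop, and the presets and fake workload are assumptions for illustration only.

```python
import time

# Hypothetical stand-in for rendering one frame at the given settings;
# in a real engine this would be the actual frame loop.  The sleep just
# fakes a workload that grows with the settings.
def render_frame(settings):
    time.sleep(0.001 * settings["shadow_res"] / 512)

PRESETS = [
    {"name": "low",     "shadow_res": 512,  "ssao": False, "msaa": 0},
    {"name": "medium",  "shadow_res": 1024, "ssao": True,  "msaa": 2},
    {"name": "high",    "shadow_res": 2048, "ssao": True,  "msaa": 4},
    {"name": "extreme", "shadow_res": 4096, "ssao": True,  "msaa": 8},
]

def measure(settings, frames=100):
    """Average milliseconds per frame over a fixed number of frames."""
    start = time.perf_counter()
    for _ in range(frames):
        render_frame(settings)
    return (time.perf_counter() - start) / frames * 1000.0

for preset in PRESETS:
    ms = measure(preset)
    print(f"{preset['name']:8s} {ms:6.2f} ms/frame ({1000.0 / ms:5.1f} fps)")
```

On real hardware you'd be looking for the preset where the ms/frame cost jumps while the visual difference stays marginal.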
 
IF Nintendo's next console is significantly more powerful than the current HD consoles, I hope the designers are finally going to move beyond the 128-bit bus that we've had in other consoles since the PS2/Xbox days. I'd love to see a 256-bit bus in there, as current high-end AMD GPUs use. I guess, though, that since both GameCube and Wii use a mere 64-bit bus (right, guys?), a 128-bit bus plus GDDR5 would be a huge upgrade. Combined with a decent amount (20-24 MB) of eDRAM, a 256-bit bus would not be needed. Still....
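
For reference, the bandwidth arithmetic behind the bus-width question, with assumed data rates (illustrative numbers, not rumoured specs):

```python
# Peak bandwidth = (bus width in bytes) x (transfers per second).
# The data rates below are assumptions for illustration only.

def bandwidth_gb_s(bus_bits, data_rate_gt_s):
    return bus_bits / 8 * data_rate_gt_s

configs = [
    ("64-bit  @ 1.4 GT/s (GDDR3-ish)",  64, 1.4),
    ("128-bit @ 4.0 GT/s (GDDR5)",     128, 4.0),
    ("256-bit @ 4.0 GT/s (GDDR5)",     256, 4.0),
]

for name, bits, rate in configs:
    print(f"{name:32s} {bandwidth_gb_s(bits, rate):6.1f} GB/s")
```

Peak numbers only, of course; it says nothing about latency or about what a framebuffer in eDRAM would take off the external bus.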
 
Just saw a rumor on GAF (which is now gone) that David Perry said the main feature of Project Cafe can be turned off, and that doing so increases the available processing power. Hmmm.
 
I think we should remember that the G70/Geforce 7800GTX was "taped out" at the end of 2004.

So if we get next-gen consoles in 2014, they may well be built around GPUs that are standard in 2012.

Another possibility is that the same thing happens as with the Xbox (2001) and Xbox 360 (2005), whose GPUs were only completed a few months before launch.


That makes me think that I hope Sony & Nvidia shoot for a custom Maxwell-generation GPU for PS4, rather than Kepler.

Kepler is likely to be to Fermi what GTX 280 was to G80 and what G70 was to NV40.

It's sad that PS3 came out later than Xbox 360 using an older GPU architecture than Xenos. I hope this does not repeat with PS4.
 
That makes me think that I hope Sony & Nvidia shoot for a custom Maxwell-generation GPU for PS4, rather than Kepler.

Kepler is likely to be to Fermi what GTX 280 was to G80 and what G70 was to NV40.

It's sad that PS3 came out later than Xbox 360 using an older GPU architecture than Xenos. I hope this does not repeat with PS4.
It won't repeat. I guarantee Sony won't use an older GPU architecture than Xenos. ;)
 
That makes me think that I hope Sony & Nvidia shoot for a custom Maxwell-generation GPU for PS4, rather than Kepler.

Kepler is likely to be to Fermi what GTX 280 was to G80 and what G70 was to NV40.

It's sad that PS3 came out later than Xbox 360 using an older GPU architecture than Xenos. I hope this does not repeat with PS4.

Indeed. Something like Maxwell (or even more) will be the best solution for a 6-year console cycle.

That's why I particularly hope the Nvidia/PS4 GPU development repeats the model adopted with the Xbox in 2001, and not a last-minute "plan B off the shelf" (the Visualizer/Cell GPU was plan "A"...), which is what RSX was.
 
It won't repeat. I guarantee Sony won't use an older GPU architecture than Xenos. ;)


Sony won't use a GPU architecture older than Xenos/C1/R500, or older than its Xenos 2/Fusion II/Krishna-derived successor (or whatever the "X720" GPU turns out to be)? ;)
 
Will next-gen consoles use unified or split memory? What are the advantages and disadvantages between the two? Which one seems more suitable for next generation?
 
Will next-gen consoles use unified or split memory? What are the advantages and disadvantages between the two? Which one seems more suitable for next generation?

If Wii 2 uses a Fusion APU, unified it is, unless AMD makes a special version with embedded on-die Z-RAM, but that is doubtful. Here's hoping for 2 GB of GDDR5!
 
Will next-gen consoles use unified or split memory? What are the advantages and disadvantages between the two? Which one seems more suitable for next generation?
If you've a fast enough bus, unified. If you aren't going to pay for that, you'll either need split RAM or eDRAM. eDRAM looks unlikely to me at the moment - I imagine they're gunning for unified (ignoring something like GC A-RAM storage).
 
Will next-gen consoles use unified or split memory? What are the advantages and disadvantages between the two? Which one seems more suitable for next generation?
Unified is cheaper to implement. It simplifies life if both CPU and GPU need to access the same memory locations (given a reasonably cheap implementation).

Split memory has the advantage that the CPU and GPU won't step on each other's toes when accessing main memory (one having to wait for the other), particularly if they have different access patterns (one streams larger chunks, the other wants more frequent, more random access). Either the streamer keeps getting interrupted or blocked, or the more frequent "random" access process gets awful latency, depending on how you handle the problem. A very large cache, for instance, can alleviate the problem, but not remove it. And having split pools could allow you to optimize memory sizes for the respective tasks, as well as optimize for latency vs. bandwidth in the different pools.

It really depends on cost and to some extent what you are trying to do.
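
A toy model of that contention effect, purely illustrative (the probabilities and burst length are arbitrary and don't model any real memory controller):

```python
import random

# GPU-like client streaming long bursts and a CPU-like client making short
# "random" requests, either sharing one bus (unified) or each having its own
# (split).  Only meant to show the queueing effect described above.

random.seed(1)

def simulate(shared, ticks=100_000):
    gpu_burst = 0         # remaining ticks of the current GPU burst
    cpu_latencies = []    # latency seen by each CPU request
    pending_since = None  # tick at which the outstanding CPU request arrived
    for t in range(ticks):
        if gpu_burst == 0 and random.random() < 0.05:
            gpu_burst = 32                        # GPU starts a 32-tick burst
        if pending_since is None and random.random() < 0.10:
            pending_since = t                     # CPU issues a request
        if pending_since is not None:
            if not shared or gpu_burst == 0:      # own bus, or shared bus idle
                cpu_latencies.append(t - pending_since + 1)
                pending_since = None
        if gpu_burst > 0:
            gpu_burst -= 1
    return sum(cpu_latencies) / len(cpu_latencies)

print("split pools : avg CPU latency %.1f ticks" % simulate(shared=False))
print("unified pool: avg CPU latency %.1f ticks" % simulate(shared=True))
```

In this toy the streamer always wins; flip the priority and the GPU burst gets chopped up instead, which is the other horn of the dilemma.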
 
Here's the wildest of the wild guesses for Wii2:

- An off-the-shelf Llano APU with 4 cores @ ~2.5GHz + 400sp @ 600MHz and a separate 400sp Redwood @ 600MHz (~HD5570) with dedicated 128bit GDDR3 memory.

The IGP in the APU renders for streaming into the controller's screen, the dedicated GPU (with higher available bandwidth) renders for the FullHD TV.
Turn off the controller screen gimmick (or reduce its output to simple in-game info), and both GPUs can Crossfire, doing ~1.9x the performance for the main screen and achieving Juniper-class performance.

I bet Nintendo could get those for dirt cheap in 2012.
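
Back-of-the-envelope arithmetic for that guess, assuming VLIW5 parts at 2 flops per stream processor per clock and taking the ~1.9x CrossFire scaling figure above at face value (HD 5750 shown as a Juniper reference):

```python
# Rough arithmetic for the guess above.  VLIW5 parts do one MADD
# (2 flops) per stream processor per clock:

def gflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1000.0

llano_igp = gflops(400, 600)   # 480 GFLOPS
redwood   = gflops(400, 600)   # 480 GFLOPS
hd5750    = gflops(720, 700)   # 1008 GFLOPS

print("APU IGP alone      : %4.0f GFLOPS" % llano_igp)
print("Redwood dGPU alone : %4.0f GFLOPS" % redwood)
print("Both, ~1.9x scaling: %4.0f GFLOPS" % (1.9 * redwood))
print("HD 5750 (Juniper)  : %4.0f GFLOPS" % hd5750)
```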
 
Here's the wildest of the wild guesses for Wii2:

- An off-the-shelf Llano APU with 4 cores @ ~2.5GHz + 400sp @ 600MHz and a separate 400sp Redwood @ 600MHz (~HD5570) with dedicated 128bit GDDR3 memory.

The IGP in the APU renders for streaming into the controller's screen, the dedicated GPU (with higher available bandwidth) renders for the FullHD TV.
Turn off the controller screen gimmick (or reduce its output to simple in-game info), and both GPUs can Crossfire, doing ~1.9x the performance for the main screen and achieving Juniper-class performance.

I bet Nintendo could get those for dirt cheap in 2012.
It's possible emulation could handle older Wii titles, as Dolphin does a decent job (considering it has to be reverse engineered), but still, that's a hell of a long shot.
 
If you've a fast enough bus, unified. If you aren't going to pay for that, you'll either need split RAM or eDRAM. eDRAM looks unlikely to me at the moment - I imagine they're gunning for unified (ignoring something like GC A-RAM storage).

Why is eDRAM a bad choice? From everything in this thread, a single pool would require a bus that would be expensive and very difficult to shrink.
 