Predict: The Next Generation Console Tech

7970 (presumably too powerful)

How could it be too powerful :cool:, especially if the release of the actual console itself would probably be quite a bit later ;)?

What do you mean by "powerful"? Are you referring to the power consumption or are you referring to the performance?
 
Probably both.

Well, power consumption might actually be an understandable argument, but how can a GPU have too much performance :???:?

More performance would be more "future-proof", wouldn't it ;)?

Also, if you take a look at the following presentation for example (especially see the highlighted (bolded/underlined) parts of the quote):

unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf said:
http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf

Elemental demo
  • GDC 2012 demo behind closed doors
  • Demonstrate and drive development of Unreal® Engine 4
  • NVIDIA® Kepler GK104 (GTX 680)
  • Direct3D® 11
  • No preprocessing
  • Real-time
    • 30 fps
    • FXAA
    • 1080p at 90%


it rather looks as if even a GTX 680 would not be able to manage full 1080p at 30 fps there :???:?

Then how would a HD 6870 or HD 6950 cope with it :???:?
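
Just to put "1080p at 90%" into rough numbers (assuming the 90% is a linear scale per axis before upscaling, which the slides don't actually spell out), a quick sketch:

```python
# Rough pixel-count sketch for "1080p at 90%".
# Assumption (not stated in the slides): 90% is a linear scale per axis,
# i.e. the demo renders at 0.9 * 1920 by 0.9 * 1080 and upscales to 1080p.
full_w, full_h = 1920, 1080
scale = 0.9

render_w, render_h = int(full_w * scale), int(full_h * scale)  # 1728 x 972
full_pixels = full_w * full_h        # 2,073,600
render_pixels = render_w * render_h  # 1,679,616

print(f"render target: {render_w}x{render_h} ({render_pixels:,} px)")
print(f"native 1080p:  {full_w}x{full_h} ({full_pixels:,} px)")
print(f"pixels shaded: {render_pixels / full_pixels:.0%} of native")  # ~81%
```

So even that "90%" per axis cuts the shaded pixel count by roughly a fifth compared to native 1080p.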
 
Then how would a HD 6870 or HD 6950 cope with it?

Optimization.

Anyway, I guess most people have already noticed this, but to make it official:

6870 = 2 TF
6950 = 2.25 TF (they're surprisingly close due to the high clocks of the 6870; rough math below)

Doesn't directly fit with "1TF+" for what it's worth.

I wonder if they could even have downclocked them to meet whatever TF spec they wanted? Just thoughts.
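
For what it's worth, those TF figures fall straight out of shader count x 2 ops per clock x clock speed. A minimal sketch, assuming the reference clocks (actual dev kit boards could of course be clocked differently):

```python
# Peak FP32 throughput = stream processors * 2 ops/clock (MADD/FMA) * clock in GHz.
# Reference clocks assumed; actual dev kit boards could be clocked differently.
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    return stream_processors * 2 * clock_ghz / 1000.0

print(f"HD 6870: {peak_tflops(1120, 0.900):.2f} TF")  # ~2.02 TF
print(f"HD 6950: {peak_tflops(1408, 0.800):.2f} TF")  # ~2.25 TF

# Downclocking scales this linearly, e.g. a 6950 at ~710 MHz would land near 2 TF,
# so hitting a "1+ TF" style target by clock alone would be easy enough.
```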
 
I haven't seen target specs, but if I were to guess, I would say they put "1+ TFLOP" so that the accurate target doesn't leak.

Putting an R6870 or R6950 in the alpha dev kit that was dispatched early this year would mean that they took cards that would give developers an idea of the GPU they would get in the retail unit, and those GPUs were about as powerful as you could get a year ago, so I wouldn't put emphasis on exact performance based on the chips in the dev kit.

It should also be noted that those cards are manufactured on a 40nm process. The real silicon should be 28nm.
 
Most people buying a console are buying a years-out-of-date GPU. And the final part could be a modern architecture, with the current devkit part just being selected for equivalent performance but not features. Considering some of the rumours regarding targeting a lower performance bracket, a decent GPU is satisfying. Anyone wanting more should be gaming on PC.

Dunno, consoles get a significant advantage with draw calls, on top of games targeting the hardware. If there's a very high bandwidth side memory attached to the GPU, then we could have an incredible amount of foliage and litter on the consoles with no concern for the costs.

Side memory (shorthand for a silicon interposer or whatever the "hypercube" marketing term ends up covering) would be similar to what the PS2 had with eDRAM and crazy fillrate, but less constrained by low-res textures and composite output.

The PC would have to wait for GPUs including side memory, or a successor to GDDR5, or a good enough level of commonly available GPGPU features. Use of GPGPU for graphics would be another strong point of the consoles (or the strong point if no magical memory is present): instead of writing for generic DirectCompute you can write more efficient, more flexible software because it's written against a fixed GPU. No least common denominator made to run on various Nvidia, AMD or even Intel architectures.
 
I haven't seen target specs, but if I were to guess, I would say they put "1+ TFLOP" so that the accurate target doesn't leak.

Putting an R6870 or R6950 in the alpha dev kit that was dispatched early this year would mean that they took cards that would give developers an idea of the GPU they would get in the retail unit, and those GPUs were about as powerful as you could get a year ago, so I wouldn't put emphasis on exact performance based on the chips in the dev kit.

It should also be noted that those cards are manufactured on a 40nm process. The real silicon should be 28nm.

From what BG says there aren't really any paper GPU target specs for Durango. Just like there aren't for Wii U (but there are for PS4). And in both cases we seem to be having a really hard time getting GPU info...

So any figures are presumably developer approximations?

Also, the 6970 was more powerful, though. If they were limited to the 6000 series and wanted the most powerful single card they could get, the 6970 was it. Or, for that matter, if they were really going big, Crossfire.
 
From what BG says there aren't really any paper GPU target specs for Durango. Just like there aren't for Wii U (but there are for PS4). And in both cases we seem to be having a really hard time getting GPU info...

So any figures are presumably developer approximations?

Also, the 6970 was more powerful, though. If they were limited to the 6000 series and wanted the most powerful single card they could get, the 6970 was it. Or, for that matter, if they were really going big, Crossfire.
Actually, BG said his source passed along another guy's guesstimate for GPU performance, so it's very vague.
 
More performance would be more "future-proof", wouldn't it ;)?
You don't want to give developers more power in the dev kits than they have in the final console as that means they could end up using techniques that run too slow on final hardware and need to be refactored. If the development hardware runs the same speed or slower, no changes will have to be made.

Dunno, consoles get a significant advantage with draw calls, on top of games targeting the hardware. If there's a very high bandwidth side memory attached to the GPU, then we could have an incredible amount of foliage and litter on the consoles with no concern for the costs.
If you want the latest GPU (user542745831's complaint), you need a PC. ;)
 
You don't want to give developers more power in the dev kits than they have in the final console as that means they could end up using techniques that run too slow on final hardware and need to be refactored. If the development hardware runs the same speed or slower, no changes will have to be made.

What was meant was this:

If development is going on with an HD 6870 or HD 6950 and that kind of GPU ends up in a 2013 console, for example, then that GPU would be around three years old by then, wouldn't it :???:?

A GPU that would already be three years old, in a product that would probably be supposed to last for several more years :???:?

If you want the latest GPU (user542745831's complaint), you need a PC. ;)

Would it be so far-fetched to expect present-day technology in a new console :???:?
 
In early 2004 (the same point in the cycle as when these dev kits came out) the Xenon (360 dev kit codename) was a G5 Power Mac with an ATI 9800 Pro (later replaced with an X800). Those cards are not even comparable to what the retail unit was eventually equipped with. But at that time there was no better option, so I guess that when MS put these dev kits together, the R6950/R6870 was the closest to what they are targeting.

Both of those cards are 2+ TFLOPS and both of them are 40nm (meaning a bigger die, more heat and a higher TDP), so I guess we are getting either something better than those two, or at least something comparable if MS thinks they don't need any more power and would be better off with the savings due to the 28nm manufacturing process.
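
Just to put a rough (and admittedly idealised) number on the 40nm to 28nm point, the geometric scaling alone would look something like:

```python
# Ideal area scaling from a 40nm to a 28nm process node.
# Real designs never shrink this cleanly, so treat it as an upper bound on the savings.
old_node_nm, new_node_nm = 40, 28
area_ratio = (new_node_nm / old_node_nm) ** 2
print(f"ideal die area at 28nm: {area_ratio:.0%} of the equivalent 40nm die")  # ~49%
```

So roughly half the die area for the same design in the ideal case, which is where the cost/heat savings argument comes from.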
 
What was meant was this:

If development is going on with an HD 6870 or HD 6950 and that kind of GPU ends up in a 2013 console, for example, then that GPU would be around three years old by then, wouldn't it :???:?
Non-final hardware devkits are only indicative. As others say, look at what was in the XB360 devkits prior to the final Xenos silicon. A 6870 in the devkit doesn't mean that GPU will feature in the final console, nor that the final console GPU will be old tech (although it might be).

A GPU that would already be three years old, in a product that would probably be supposed to last for several more years :???:?

Would it be so far-fetched to expect present-day technology in a new console :???:?
More modern tech means a higher price. As I've mentioned before, this gen could be executed as starting in 2011 with $600 consoles and bleeding-edge tech, which drop in price to a $400 console in 2013, only without the hardware actually being released until 2013. If the hardware had been released in 2011, it would be modern, but people buying in 2013+ would be getting old hardware, yet still be happy with it. There's only 12 months or so of actually having the latest, greatest hardware in a console, and the rest of the time it's outdated, so there's no real reason to go with just the latest, greatest hardware other than for 12 months of bragging rights (which are rendered moot against a PC). You have to balance out the potential gains against the potential losses and pick the appropriate hardware. A GPU that's a few years old and shrunk small and cheap and cool and suitably capable isn't a bad choice in itself.
 
so there's no real reason to go with just the latest, greatest hardware other than for 12 months of bragging rights

How can you say something like that, especially considering how long console cycles apparently are these days :???:?

Those cycles (unfortunately?) might be much longer than just 12 months :???:?
 
So, then what are the odds for an HD 8970 or HD 9970 (or whatever they are going to be called) :cool:?

:mrgreen:;)

I believe it will be the Radeon 8000 series, which is just a refresh but allows better integration between the CPU and GPU. I believe the Steamroller APU (Kaveri) uses this tech. A small but significant update that is hopefully worth enough to be included.
 
So, then what are the odds for an HD 8970 or HD 9970 (or whatever they are going to be called) :cool:?

I'd say pretty close to zero. I'm guessing the performance of the final silicon is going to be at least 6870-6950 or slightly faster, but with clearly lower power consumption. I think it's going to be 7850/8770/8850 or something like that. 200W GPU is not going to happen.
 
The first Samaritan demo ran on 3x Nvidia GTX 580s.

Crytek wants 8 GB of RAM and 4x Nvidia GTX 590 performance.


So the performance of an Nvidia Maxwell GPU is what I expect, and it fits nicely with Crytek's and Epic's wishes. I'd be shocked by a GPU that is 2 teraflops or less.
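
For a sense of scale, here's a rough sketch of what those wishes add up to in theoretical peak FP32 terms, using Nvidia's published shader counts and shader clocks and (unrealistically) treating multi-GPU as perfectly additive:

```python
# Fermi peak FP32 = CUDA cores * 2 ops/clock * shader ("hot") clock in GHz.
# Multi-GPU scaling is treated as perfectly additive here, which it never is in practice.
def fermi_peak_tflops(cuda_cores: int, shader_clock_ghz: float) -> float:
    return cuda_cores * 2 * shader_clock_ghz / 1000.0

gtx_580 = fermi_peak_tflops(512, 1.544)    # ~1.58 TF per card
gtx_590 = fermi_peak_tflops(1024, 1.215)   # ~2.49 TF per dual-GPU board

print(f"Samaritan rig (3x GTX 580): ~{3 * gtx_580:.1f} TF")  # ~4.7 TF
print(f"Crytek wish   (4x GTX 590): ~{4 * gtx_590:.1f} TF")  # ~10.0 TF
```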
 
I'd say pretty close to zero. I'm guessing the performance of the final silicon is going to be around 6870-6950 or slightly faster, but with clearly lower power consumption. I think it's going to be 7850/8770/8850 or something like that. 200W GPU is not going to happen.

A "hardcore" console coming out at the end of 2013 for example, containing a GPU with the specs and/or performance of a HD 6870 or HD 6950 would be kind of "ridiculous", wouldn't it?

Just think about it:

Two years after the release of such a console for example, the GPU in it would already be around five years old :???:?

Is anyone really thinking that would be "next-gen" worthy?

Something like an HD 8970 or HD 9970 (or whatever they are going to be called) would probably be more appropriate for that at that time, wouldn't it :D;)?

Also, just an example:

Is anyone here REALLY impressed by the visuals of the "Elemental Cinematic" demo video for example?

It's not necessarily THAT impressive looking, is it?

Now, if even a GTX 680 apparently can't manage full 1080p at 30 fps there :???: (at least according to that SIGGRAPH 2012 presentation), then anything less than a GTX 680 for "next-gen" consoles would be kind of "ridiculous", wouldn't it :eek:?

That's assuming you are expecting a really big leap from "next-gen" consoles of course :???:.
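
To quantify the gap being argued about here, the theoretical peak FP32 numbers at the reference base clocks (boost clocks and architectural efficiency differences ignored, so take it as a rough sketch only):

```python
# Theoretical peak FP32 = shaders * 2 ops/clock * reference base clock in GHz.
# Boost clocks and per-architecture efficiency differences are ignored.
cards = {
    "GTX 680 (Kepler)": 1536 * 2 * 1.006 / 1000,  # ~3.09 TF
    "HD 6950":          1408 * 2 * 0.800 / 1000,  # ~2.25 TF
    "HD 6870":          1120 * 2 * 0.900 / 1000,  # ~2.02 TF
}
for name, tflops in cards.items():
    print(f"{name}: ~{tflops:.2f} TF")
```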
 
How can you say something like that, especially considering how long console cycles apparently are these days :???:?
There has been one long-cycle generation. Next gen might see a refresh in 3 years for all we know.

A "hardcore" console coming out at the end of 2013 for example, containing a GPU with the specs and/or performance of a HD 6870 or HD 6950 would be kind of "ridiculous", wouldn't it?
How do you know MS are targeting the 'hardcore'? Maybe they'll leave the PC to target the hardcore and have their console as the family machine, where cost is more important?

Just think about it:

Two years after the release of such a console for example, the GPU in it would already be around five years old :???:?
Five-year-old GPUs can sell just fine. No one buys a console based on its innards, but on what they see on screen. Most console gamers will see a marked improvement even from an old GPU.

Something like an HD 8970 or HD 9970 (or whatever they are going to be called) would probably be more appropriate for that at that time, wouldn't it :D;)?
It's all about business decisions, for which there is a proper thread. That's why the early rumours of low-power parts were a valid possibility and a concern for plenty of core console gamers. Now it's looking like the machine won't be as low-end as that, which has people breathing a sigh of relief. The notion of very expensive loss-leading consoles running the latest, high-end hardware is looking increasingly unsustainable. There are lots of other business models and opportunities to be sought, so it's wrong to expect the consoles to launch with the latest tech. Maybe they will, maybe they won't, but attempts to predict the next-gen consoles can't rely on that assumption.
 