Predict: The Next Generation Console Tech

How can an 8800-series PC GPU make it into a next-generation console?

The 8800 series is not out yet, and doesn't it take months to finalise a console and get working samples out?

Surely they had to lock down the GPU specs months ago in order to start making prototypes and bug-testing the hardware?
 
The 8-series AMD GPUs have probably been in development for around two years, so they could easily have started adapting them for use in a console some time ago.

Remember that the time to market for a GPU is measured in years, usually two; a card that is released today has probably had a replacement in development for at least a year by the day it hits the shelves.

That gives you plenty of time to get next-gen cards into a console released in a year's time.
 
Absolutely, I don't think people realize how long it takes to make a new chip. If anything, AMD is probably already working on the 9xxx/10xxx series.

From my experience in big semiconductor companies, there are multiple teams that work on a product. Usually a high-level architecture team comes up with the design 2-4 years before the product ships; they pass the spec/design to lower-level architecture teams that may write accurate performance models of that design. Then the RTL gets written and synthesized. Initial tape-out and then validation get done by yet another team. All of the work is done by different teams, with multiple designs concurrently in the pipeline.

I bet the 8xxx series feature set and architecture was finished and finalized last year, probably concurrent with the next-gen console designs.
 
Durango won't have a 1TF GPU, but even if it did, I wouldn't expect the CPU to compensate for it. MS is clear on platform architecture and ease of development; Ballmer would literally chop off a couple of heads if someone came in with that proposition. I know they barely went with a multi-core CPU this gen, since even that was considered complicated.

Maybe we should start discussing something else instead of the Durango GPU, which we know nothing about. BG stated several times that "1TF+" was a vague guesstimate from a guy who is not even his source, for a GPU that is not even finished. People then went on with the "1.1-1.5" number without anything to back it up.

Based on the assumption that they went with Jaguar, just like Sony went from Steamroller to Jaguar, I would expect that developers wanted more GPU grunt. And if those shots are legit Alpha Kits (lherre said developers got them in Nov '11) then the GPU won't be even close to 1 TFLOP.
 
$599, 1 million troops, 1.21 gigawatts, acerts known as acert, deal with it, duct tape, finfets everywhere, flops capacitor, giant enemy crabs, i want to believe, impossibru, it belongs in a museum, liquid cooling, ludicrous speed, microsoft-sony.com, nothing but bits, over 9000, shifty gonna get you!, subscriptions everywhere

Maybe we can point this creativity towards uncovering next gen mysteries?

Personally, I vote for the expected unexpected. It'll probably be technology which is familiar but put together in interesting, unique and unexpected ways. If we consider the Wii U with its IO processor and DSP, or the 360 with its eDRAM, perhaps what is most interesting isn't the number of flops but the unique console-esque engineering solutions.

How do you make something more than the sum of its parts?
 
The 8-series AMD GPUs have probably been in development for around two years, so they could easily have started adapting them for use in a console some time ago.

I don't think there's anywhere near enough time; they spend months bug-testing the architecture before the chip is even classed as ready.

With the move to GCN, AMD had to make a fresh start from the ground up, and I seriously doubt they had the 8800 series in production at the same time or within the same time frame as the concept of GCN.

I still think you're talking a mid-to-low-range 7000 series GPU.

Even a 7770 is a good card to use; on the latest drivers it works out like this:

AMD 7770 = AMD 5850
AMD 5850 = GTX 285 OC
GTX 285 OC = 9800GX2
9800GX2 = 2x 9800GTX
9800GTX = ~8800 Ultra
8800 Ultra = 1900XT Crossfire
1900XT Crossfire = 2x Xenos

360 has 1x Xenos, so as you can see it's going to be a monster upgrade with just a low-mid-range GPU.
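
For what it's worth, here is the back-of-envelope arithmetic that chain implies, as a quick Python sketch. The per-step multipliers are assumptions read straight off the chain ("=" counts as 1x, a dual-GPU or CrossFire step counts as 2x), not measured benchmarks:

Code:
# Walk the equivalence chain above from HD 7770 down to Xenos.
chain = [
    ("HD 7770    -> HD 5850",    1.0),
    ("HD 5850    -> GTX 285 OC", 1.0),
    ("GTX 285 OC -> 9800GX2",    1.0),
    ("9800GX2    -> 9800GTX",    2.0),  # GX2 is two 9800GTX GPUs
    ("9800GTX    -> 8800 Ultra", 1.0),
    ("8800 Ultra -> 1900XT CF",  1.0),
    ("1900XT CF  -> Xenos",      2.0),  # a CrossFire pair ~= 2x Xenos
]

xenos_equiv = 1.0
for step, mult in chain:
    xenos_equiv *= mult
    print(f"{step}: cumulative ~{xenos_equiv:.0f}x Xenos")

Taken literally the chain works out to roughly 4x a single Xenos; real scaling varies wildly per workload, so treat it as ballpark at best.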
 
360 has 1x Xenos, so as you can see it's going to be a monster upgrade with just a low-mid-range GPU.

Although an HD 7770 would probably be a noticeable upgrade over the hardware in current consoles, how could an HD 7770 be called a "monster upgrade" (as you called it) if, at least according to the following presentation, for example:

unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf said:
http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf

Elemental demo
  • GDC 2012 demo behind closed doors
  • Demonstrate and drive development of Unreal® Engine 4
  • NVIDIA® Kepler GK104 (GTX 680)
  • Direct3D® 11
  • No preprocessing
  • Real-time
    • 30 fps
    • FXAA
    • 1080p at 90%

not even a GTX 680 appears to be able to manage full 1080p at 30 fps there :???:?
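
As an aside, the "1080p at 90%" line presumably means a reduced internal render scale. Assuming the 90% applies per axis (my assumption; the slide doesn't say), the pixel arithmetic looks like this:

Code:
# Pixel-count arithmetic for "1080p at 90%", assuming the 90% is a
# per-axis internal render scale (the slide doesn't specify).
full_w, full_h = 1920, 1080
scale = 0.90
render_w, render_h = int(full_w * scale), int(full_h * scale)

full_px = full_w * full_h        # 2,073,600 pixels at native 1080p
render_px = render_w * render_h  # 1728 * 972 = 1,679,616 pixels
print(f"{render_w}x{render_h}: {render_px / full_px:.0%} of native 1080p")
# -> 1728x972, about 81% of the pixel count of native 1080p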
 
A 7770 would be a "monster upgrade" on current hardware (heck, just look at the transistor count: 1.5 billion vs 300 million for the current GPUs, not to mention nearly double the clocks on top), but rather disappointing in the overall scheme of things.

Pretty obvious concept.
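
A crude sketch of that arithmetic (the clock figures are my additions, and treating transistors times clock as a raw throughput ceiling is obviously a huge simplification):

Code:
# Very rough upgrade ceiling: transistor budget times clock ratio,
# ignoring architecture and efficiency differences entirely.
xenos_transistors  = 300e6   # ~300M, figure from the post above
xenos_clock_mhz    = 500     # Xenos core clock
hd7770_transistors = 1.5e9   # ~1.5B, figure from the post above
hd7770_clock_mhz   = 1000    # HD 7770 GHz Edition core clock

ratio = ((hd7770_transistors / xenos_transistors)
         * (hd7770_clock_mhz / xenos_clock_mhz))
print(f"crude ceiling: ~{ratio:.0f}x")  # 5x transistors * 2x clock ~= 10x

A paper ceiling like this says nothing about memory bandwidth or efficiency, which is exactly why it can still be "disappointing in the overall scheme of things".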

But "Rangers" apparently wrote the following:

I think Ruskie is trying to say it won't be closer to 1TF because it will be way more?
 
If it really will take one more year (end of 2013 or whatever), or maybe even longer, then anything below HD 7970 / GTX 680 performance/specs, or maybe even anything below HD 8970 / HD 9970 or GTX 780 / GTX 880 (or whatever they are going to be called) performance/specs, would be disappointing, wouldn't it :mrgreen::p:D;)?

I think Ruskie is trying to say it won't be closer to 1TF because it will be way more?

You might have a point there ;).
 
how could an HD 7770 be called a "monster upgrade"?

So are you saying that going from a 1900XT to a 7770 would not be a monster upgrade?

You're talking at least a 6-7x jump in performance, maybe more, plus the added benefit of DX11.

I really think you guys are getting overexcited and overgenerous with the hardware.

I don't think Microsoft or Sony want to take a big loss, if any, on the next set of consoles like they did in previous generations, so I think uber-high-end hardware is out of the window for that reason alone.

People are expecting too much.
 
So are you saying that going from a 1900XT to a 7770 would not be a monster upgrade?

A lot of hardware today is much faster than what's in current consoles, isn't it? It doesn't even take an HD 7770 to be quite a bit faster than current consoles, does it? So that alone is not necessarily anything special?

But this thread is about "next-gen", isn't it?

And if, at least according to that presentation mentioned above, not even a GTX 680 appears to be able to manage full 1080p at 30 fps in a demo of a "next-gen" engine, then how much "next-gen" would an HD 7770 really be?

Especially at the end of 2013, or maybe even later, when GTX 780 / GTX 880 and HD 8970 / HD 9970 (or whatever they are going to be called) might already be around?

And considering that the demo in question doesn't even necessarily look THAT impressive, anything below GTX 680 performance/specs would be quite disappointing, wouldn't it?
 
But if said console had the equivalent of a CPU, a physics processor, and a GPU working efficiently together, then the GPU wouldn't have to do a lot of work it wasn't designed for just because the latency between CPU and GPU is too big, work that currently leaves the CPU bored to death.

In that scenario, the GPU could be more lightweight.

Who knows? Just speculating here ...
 
Well, regarding the CPU, see for example the following quote:

Digital Foundry said:
http://www.eurogamer.net/articles/digitalfoundry-vs-unreal-engine-4

Digital Foundry vs. Unreal Engine 4

[...]

So if this is a tech demo, just how much of it will we see in actual next-gen titles? The UE4 demo is running on PC, specifically an Intel Core i7 processor with an NVIDIA GTX680 and 16GB of RAM - what Epic terms a standard development box. This is almost certainly considerably beyond the base hardware of both Orbis and Durango, but factoring in the advantages of a fixed hardware platform with dedicated APIs, the gap narrows.

"Obviously we don't know what the final specs are for the next-generation consoles and I'm sure we'll have to make trade-offs to put a final quality game onto whatever comes out," says Alan Willard.

"We have a pretty good history of making our tech demos look like what our final games are. Gears started off as a tech demo years ago at E3 in 2004 or so. We certainly don't try to fake what we're capable of doing. Obviously the engine is very new, we're still exploring what we can do with it and as more details come out on what the next generation hardware is, we'll have better ideas on what our final trade-offs will be. We're still waiting to find out ourselves."

We can't help but feel that Epic is perhaps playing with us just a little here. Bearing in mind the realities of modern GPU design (they can take years to architect and get into production) and the projected Q4 2013 release dates, Orbis and Durango are almost certainly in the final phases of development. As a major stakeholder in the games business via its successful middleware business, and factoring in the company's previous input into the design of the Xbox 360, Epic must surely possess a rather good grasp of what these machines are capable of. This perhaps makes the UE4 demo even more exciting: what we're seeing here is its vision of the fundamental building blocks that will underpin a whole generation of next-gen titles.


;)
 
Next generation will be defined by what MS and Sony put in their boxes, not by Epic or anyone else's demos.
Next gen engines will be scaled to run on those boxes, or Epic won't be selling very many copies.

The days where consoles get the ultra high end PC parts is over IMO, power constraints dictate that even if costs don't.

As an aside, I could imagine a scenario where one console had a clear 2x advantage in performance, and still couldn't differentiate itself significantly from the competition.
 
Next generation will be defined by what MS and Sony put in their boxes, not by Epic or anyone else's demos.

See for example:

develop-online.net/features/1462/Epic-Games-next-gen-manifesto said:
http://www.develop-online.net/features/1462/Epic-Games-next-gen-manifesto

Epic Games' next-gen manifesto

Company president Mike Capps discusses how future technology fits Epic's masterplan

[...]

Do you want to engage with the console manufacturers before their new systems come out? Do you want to influence them on hardware specifications?

That’s absolutely our plan. I can’t say much more than that. Okay, let’s say, a year ago that was our plan, and I can’t tell you whether we’ve done it or not yet.

Our Samaritan concept, if you look at PC hardware in two or four years’ time, is something that the next consoles can achieve. It was just that no one knew what a next-generation game would look like – so that was our idea, to show people what we can achieve.

I mean, The Samaritan is a real-time demo that looks like an animated movie from about five years ago – the tech is getting that sophisticated. So our goal was to show off some of the technologies we would like to see on the next-gen platforms, and also to have The Samaritan as the benchmark. We believe what we’ve demonstrated is achievable at a reasonable development cost, so it’s what gamers should be demanding for next generation.

We’ve shown that demonstration to about sixteen hardware manufacturers. Not just the console guys, but companies like Nvidia who give us feedback about how to deliver the tech more efficiently. I mean, that’s the idea, a demonstration to start pushing everything forward.

[...]


:mrgreen:;)
 
Maybe we can point this creativity towards uncovering next gen mysteries?

So, do we think the GPU is going to be made of billions of "giant enemy crabs"? Though if there are billions of them I would assume home storage may be an issue... maybe it's going to be some cloud gaming system...? :LOL:
 
Well, this is the next-generation thread on B3D; I'm close to thinking that this uber-focus on paper FLOPS is as close to trolling as can be.

It tells only part of the story, luckily for a company like Nvidia, which was still regarded as the leader in high-performance 3D cards; otherwise they would not have sold a damned card (or only at a big loss) between the introduction of Evergreen and the release of Kepler...

There is also the evolution of graphics cards in the PC realm and the fact that they have hit the power wall. But why care? Apparently console makers have to beat or match whatever PCs are pushing on every account, or even more so on paper FLOPS alone.

Not to mention that this reborn one-sided focus seems to correlate with rumors that one system may be pushing a bit more paper FLOPS than the other.

I remember writing a post about how the different FLOPS compare across AMD architectures (VLIW5, VLIW4 and the scalar architecture); it took me time, but it was lost time it seems. Roughly, for a (gross) comparison between those architectures you would have to multiply their paper FLOPS (in that order) by 3.8/5, 3.8/4 and 1.
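
To make those multipliers concrete, here is a small sketch applying them to one representative card per architecture (the cards and their paper-FLOPS figures are my own examples, not from that earlier post):

Code:
# Applying the (gross) utilization multipliers above:
# VLIW5 ~= 3.8/5, VLIW4 ~= 3.8/4, GCN (scalar) ~= 1.
cards = [
    ("HD 5870 (VLIW5)", 2.72, 3.8 / 5),  # paper TFLOPS, multiplier
    ("HD 6970 (VLIW4)", 2.70, 3.8 / 4),
    ("HD 7970 (GCN)",   3.79, 1.0),
]

for name, paper, mult in cards:
    print(f"{name}: {paper:.2f} paper -> ~{paper * mult:.2f} 'effective' TFLOPS")
# -> roughly 2.07, 2.56 and 3.79 'effective' TFLOPS respectively: the
#    paper gap between a VLIW5 card and a GCN card overstates the real gap.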
I also posted links to hardware.fr reviews, because they have consistently (they've been doing it for a long time) published results for theoretical measurements (geometry, texturing, fillrate, fillrate with blending). Those measurements show that paper FLOPS are not the most relevant metric for comparing GPUs (think of Nvidia again...), that is, if you wanted to compare GPUs on a single metric, which you would not want to. Having read their reviews for a long time, the most relevant data point is fillrate with blending, as it has held up across many competing generations of architectures from both manufacturers.

The link was ignored; I guess anything that doesn't read in TFLOPS is irrelevant. To me it's a sad omen of the shit blizzard to hit the web in 2013, to paraphrase "Trailer Park Boys".
Or, in a more classy way, Mark Twain:
History doesn't repeat itself, but it does rhyme.
--------
To Acert93: I think there might be a significant difference between Xenos and RSX, not in theoretical numbers but in sustained fillrate and fillrate with blending. RSX does have its strong(er) points: more raw pixel-shading power, more texturing power, as well as support for some shadow filtering.
 