Predict: The Next Generation Console Tech

How expensive will DDR3/DDR4 be 3-4 years from now, though? The long-term cost might favor DDR4.

What's stopping them from moving to DDR4 with a future shrink in 2015-16? Change the memory controller (AMD will also have some experience with DDR4 memory controllers by that point) and use DDR4 at the same speeds and latency. If they use high-speed (2GHz+) DDR3, then DDR4 at those (low) speeds will be available cheaply well before DDR3 starts to become expensive, so the crossover point will happen fairly early: maybe 2015, possibly in time for their shrink to 20nm?

Is memory performance that hard to match across different technologies, given the same clocks and latency? If one performs better on some operations, about the same on others, and worse on a few, couldn't they artificially limit it? Besides, I doubt performance would change by more than 1-3% at most.
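For a rough sense of why the memory technology matters less than the clocks, here's a minimal peak-bandwidth sketch in Python; the 256-bit bus and 2133 MT/s data rate are assumptions purely for illustration:

Code:
# Peak-bandwidth sketch: peak bandwidth depends on bus width and effective
# data rate, not on whether the chips are DDR3 or DDR4 (timings aside).
# The 256-bit bus and 2133 MT/s figures are illustrative assumptions only.

def peak_bandwidth_gb_s(bus_width_bits, transfers_per_sec):
    return bus_width_bits / 8 * transfers_per_sec / 1e9

BUS_WIDTH_BITS = 256        # assumed bus width
EFFECTIVE_RATE = 2133e6     # transfers/s, e.g. DDR3-2133 or a DDR4 part at the same clock

ddr3 = peak_bandwidth_gb_s(BUS_WIDTH_BITS, EFFECTIVE_RATE)
ddr4 = peak_bandwidth_gb_s(BUS_WIDTH_BITS, EFFECTIVE_RATE)  # same clocks -> same peak
print(f"DDR3: {ddr3:.1f} GB/s, DDR4: {ddr4:.1f} GB/s")      # ~68.3 GB/s either way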
 
So Durango has a hardware blitter? Starting to sound like an Nvidia design. The guy who designed the Atari Jaguar 2 blitter works at Nvidia. Maybe people are confusing the Atari Jaguar and AMD Jaguar?
 
Cool. Kinda sounds like a Crossfire setup, and it would explain why the uniqueness of the design has been mentioned rather consistently over time. If that's the case it's more believable, though for me it also goes back to the AMD FLOPS vs. Nvidia FLOPS debate, and that still wouldn't be close to a 680 if the main GPU is comparable to an 8000-series 7770 equivalent. It should surpass the PS4, though, based on what's known so far. Only time will tell by how much, if this is the case.

Yeah, it should be interesting.

BTW bg, did you ever have numbers for the Wii U's GPU? I saw rumours of 500 GFLOPS, but that seems too high?
 
While I realize my knowledge is rudimentary at best, I fail to see how a DSP for sound is going to substantially improve the rendering of games on the Durango. A bit more efficient, sure. Enough to see a substantial visual difference...?

I have something of a problem with the special sauce/pseudo-GPU idea. If the SRAM is not considered part of the 3... Have any of the ideas put forth taken into account that MS's history so far has been developer-friendly? Someone above compared the ring-bus/blitter to Cell. Cell was far from loved by developers, and MS is more dependent on 3rd-party support than Sony.

I fail to see any idea presented which could make a 1.3 TF GPU outperform a 3 TF one (even in something as vague as IQ), much less an idea that would not be exotic or difficult to use. I must be missing something.
 
Yeah, it should be interesting.

BTW bg, did you ever have numbers for the Wii U's GPU? I saw rumours of 500 GFLOPS, but that seems too high?

Nah. Once it got to the point where I had about all the concrete info I could get, I gave up, since Nintendo wasn't revealing certain things to anyone. It was interesting to see that when I estimated clocks based on the DSP clock, the GPU came in higher than my 480MHz guess and below my lowest CPU clock guess of 1440MHz.

Back to Xbox 3: I did a search, and 7770s in Crossfire were comparable in performance to a 6950/6970, which was heavily guessed at for the PC kit. Just a neat coincidence.
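For what it's worth, the theoretical numbers line up; a quick sketch in Python, using the reference SP counts and clocks I believe those cards shipped with (treat them as assumptions):

Code:
# Theoretical single-precision throughput = shaders * 2 ops/clock (FMA) * clock.
# SP counts and clocks are the reference specs as I recall them (assumptions).

def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

hd7770 = tflops(640, 1.0)     # ~1.28 TF each
hd6950 = tflops(1408, 0.8)    # ~2.25 TF
hd6970 = tflops(1536, 0.88)   # ~2.70 TF
print(f"2x 7770: {2 * hd7770:.2f} TF vs 6950: {hd6950:.2f} TF / 6970: {hd6970:.2f} TF")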

No chance of crossfire in Durango.

Oh, I wasn't saying it was Crossfire. I said it sounded like Crossfire, since in that scenario multiple "GPUs" are used in tandem to increase graphical performance.
 
While I realize my knowledge is rudimentary at best, I fail to see how a DSP for sound is going to substantially improve the rendering of games on the Durango. A bit more efficient, sure. Enough to see a substantial visual difference...?

I have something of a problem with the special sauce/pseudo-GPU idea. If the SRAM is not considered part of the 3... Have any of the ideas put forth taken into account that MS's history so far has been developer-friendly? Someone above compared the ring-bus/blitter to Cell. Cell was far from loved by developers, and MS is more dependent on 3rd-party support than Sony.

I fail to see any idea presented which could make a 1.3 TF GPU outperform a 3 TF one (even in something as vague as IQ), much less an idea that would not be exotic or difficult to use. I must be missing something.

Even if this "secret sauce" exists, I question whether it can be applied universally, what its limitations are, and what burden its use places on developers. Are you forced to use ray tracing? Does it only work with forward renderers, or maybe it's limited to deferred rendering? Is it like the 3DS, where you get a few hard-coded shader effects, and will that mean Durango gets left behind if Orbis devs start creating more naturalistic or artistically designed shaders that don't fit the fixed function's expected parameters? Will every Durango game have the same "Durango" look? Won't it be a hassle for programmers, who made their displeasure at needing to manage heterogeneous cores on the PS3 known far and wide?
 
Nah. Once it got to the point where I had about all the concrete info I could get, I gave up, since Nintendo wasn't revealing certain things to anyone. It was interesting to see that when I estimated clocks based on the DSP clock, the GPU came in higher than my 480MHz guess and below my lowest CPU clock guess of 1440MHz.

Back to Xbox 3: I did a search, and 7770s in Crossfire were comparable in performance to a 6950/6970, which was heavily guessed at for the PC kit. Just a neat coincidence.



Oh, I wasn't saying it was Crossfire. I said it sounded like Crossfire, since in that scenario multiple "GPUs" are used in tandem to increase graphical performance.

Remember pastebin?

You may have heard of the two-GPU rumor already. Think it's a load of crap? Think again. That's the road that MS is going down. But I should say, it's not your conventional dual-GPU setup that you may find in some PCs. MS and AMD have been working on something for quite a while, and if everything works out, expect this thing to blow you away when you first see it. Similar to Xenos at the beginning of this gen, it's a pretty innovative chip.

xD
 
Even if this "secret sauce" exists, I question whether it can be applied universally, what its limitations are, and what burden its use places on developers. Are you forced to use ray tracing? Does it only work with forward renderers, or maybe it's limited to deferred rendering? Is it like the 3DS, where you get a few hard-coded shader effects, and will that mean Durango gets left behind if Orbis devs start creating more naturalistic or artistically designed shaders that don't fit the fixed function's expected parameters? Will every Durango game have the same "Durango" look? Won't it be a hassle for programmers, who made their displeasure at needing to manage heterogeneous cores on the PS3 known far and wide?

Perhaps try to achieve higher efficiency so that a smaller GPU can punch above its weight?

Is there any potential speedup if specialized h/w can manipulate a "database" of triangles quickly? What about raycasting/raytracing and advanced culling to save work? Collision and animation?

In parallel, the fast RAM helps to achieve low latency.

Specialized blitter sounds handy too.
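Purely as an illustration of the "advanced culling to save work" idea, here is a toy back-face culling pass in Python; nothing here reflects actual Durango hardware, it just shows the kind of triangle rejection a fixed-function front end could do before the shaders ever see the work:

Code:
# Purely illustrative: the kind of triangle rejection a fixed-function
# "triangle database" unit could do up front so the GPU never sees the work.
# Back-face test via the signed area of the projected (screen-space) triangle.

def is_front_facing(v0, v1, v2):
    # 2D cross product of the triangle edges; the sign gives the winding order
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    return ax * by - ay * bx > 0

triangles = [
    ((0, 0), (1, 0), (0, 1)),   # counter-clockwise -> front-facing, kept
    ((0, 0), (0, 1), (1, 0)),   # clockwise -> back-facing, culled
]
visible = [t for t in triangles if is_front_facing(*t)]
print(f"{len(visible)} of {len(triangles)} triangles survive culling")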
 
While I realize my knowledge is rudimentary at best, I fail to see how a DSP for sound is going to substantially improve the rendering of games on the Durango. A bit more efficient, sure. Enough to see a substantial visual difference...?

I have something of a problem with the special sauce/pseudo-GPU idea. If the SRAM is not considered part of the 3... Have any of the ideas put forth taken into account that MS's history so far has been developer-friendly? Someone above compared the ring-bus/blitter to Cell. Cell was far from loved by developers, and MS is more dependent on 3rd-party support than Sony.

I fail to see any idea presented which could make a 1.3 TF GPU outperform a 3 TF one (even in something as vague as IQ), much less an idea that would not be exotic or difficult to use. I must be missing something.

Developers hated the split memory pool, RSX's vertex setup bottleneck, and Cell's primitive dev tools. But if the setup is mature and polished enough, I don't think they necessarily hate Cell. For example, MLAA can be plonked in easily if they haven't used up the SPUs.
 
That's from seronx. I wouldn't pay any mind to his SP counts or model numbers but I think he might have some insight into the project end of things, like timelines and architectures.

I fully agree, but I don't know... it seems the rumors of an A10 plus "192GB/sec" GDDR5 are very consistent, and combining the "specs" of a GCN2 8770 with 192GB/sec and 768 SPs with the A10's 384 SPs, both at 800MHz (~1.8432 TFLOPS), seems likely.
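The arithmetic behind that 1.8432 figure, taking the rumored SP counts at face value (a sketch of the math only, not a confirmation of the specs):

Code:
# FLOPS = shaders * 2 ops/clock (FMA) * clock. SP counts are the rumored
# figures quoted above, not confirmed specs.
gpu_sps, apu_sps, clock_ghz = 768, 384, 0.8
tflops = (gpu_sps + apu_sps) * 2 * clock_ghz / 1000.0
print(f"{tflops:.4f} TFLOPS")   # 1.8432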
 
His sp count for Orbis is spot on as far as I know.

I doubt any spec posted online could really be spot on at this point in regards to Orbis.

I fully agree, but I don't know... it seems the rumors of an A10 plus "192GB/sec" GDDR5 are very consistent, and combining the "specs" of a GCN2 8770 with 192GB/sec and 768 SPs with the A10's 384 SPs, both at 800MHz (~1.8432 TFLOPS), seems likely.

Yeah, the only things I'm inclined to believe are the GDDR5 and bandwidth figures, because of their repeated hints at stacking.

I think they'll push for as powerful a GPU as they can, but they have to work out how much they can get away with on a 2.5D/3D stacked die. Whether or not they've figured that out yet, who knows. I hope they have if they want to make it by the end of the year :)
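As a sanity check on the 192GB/s figure, one GDDR5 configuration that produces it, where the bus width and per-pin rate are my own assumptions rather than anything leaked:

Code:
# 192 GB/s falls out of, e.g., a 256-bit GDDR5 interface at 6 Gbps per pin.
# Both numbers are assumptions chosen only to illustrate the arithmetic.
bus_width_bits = 256
gbps_per_pin = 6
bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8
print(f"{bandwidth_gb_s:.0f} GB/s")   # 192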
 
I doubt any spec posted online could really be spot on at this point in regards to Orbis.

That's why I put "as far as I know".

I trust three sources when it comes to Orbis:

1. VGLeaks
2. A certain individual on GAF
3. Sweetvar26

Seronx's info is consistent with, if not exactly the same as, the above three.

I doubt they could come up with the same erroneous information separately on their own.
 
I'm meh on VGLeaks. Meh on any site, really. They do it for the hits. I'd rather trust dev posts and whispers from secret conference goers. And there's only one PS-leaking guy on GAF, AFAIK.

Proelite said:
I doubt they could come up with the same erroneous information separately on their own.

You see how fast predictions from this board get posted as confirmed specs elsewhere? lol. I'm sure 90% of them bite off other credible-sounding rumors to seem more legit, like certain GAF people like to do.
 
On one hand we have an insider like bkilian telling us not to expect 680 levels of transistors and FLOPS because of power and other constraints; on the other we have tales of 400mm²+ SoCs with additional processors and other features. How can you do a 400mm² die and still sell the box at console prices? And if you can, why use cheap DDR3 on a narrow bus?
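Some napkin math on why a 400mm² SoC is scary for a console bill of materials; the wafer cost and yield below are placeholders I'm assuming purely for illustration, not real figures:

Code:
import math

# Crude dies-per-wafer estimate (standard approximation) for a 300mm wafer.
# Wafer cost and yield are made-up placeholders; in reality yield also drops
# as the die gets bigger, which makes the large die look even worse.
WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 5000.0   # assumed 28nm wafer cost, purely illustrative
YIELD = 0.5               # assumed yield, purely illustrative

def gross_dies(die_area_mm2):
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for area in (200, 400):
    good = gross_dies(area) * YIELD
    print(f"{area}mm^2 die: ~{good:.0f} good dies/wafer, ~${WAFER_COST_USD / good:.0f} each")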

P.S. Can someone explain what a "blitter" would do in a modern GPU design?
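For anyone unfamiliar with the term itself (leaving aside what one would be for in a modern GPU), a blitter is fixed-function hardware for copying rectangular blocks of pixels between memory regions; a purely illustrative software version of the operation:

Code:
# A "blit" is a rectangular block copy from one surface to another, something
# dedicated hardware can do without spending CPU or shader time on it.
# Surfaces here are plain row-major lists of pixel values (illustrative only).

def blit(src, dst, src_x, src_y, dst_x, dst_y, width, height):
    for row in range(height):
        for col in range(width):
            dst[dst_y + row][dst_x + col] = src[src_y + row][src_x + col]

src = [[1] * 8 for _ in range(8)]   # 8x8 source surface filled with 1s
dst = [[0] * 8 for _ in range(8)]   # 8x8 destination filled with 0s
blit(src, dst, 0, 0, 2, 2, 4, 4)    # copy a 4x4 block to offset (2, 2)
print(sum(map(sum, dst)))           # 16 -> sixteen pixels copied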
 