Predict: The Next Generation Console Tech

I don't know if this is the right place to discuss EDRAM potential for next-gen consoles (I think a lot of threads and posts were dedicated to this in the past), but I do believe one thing about EDRAM: it is worthwhile ONLY IF you can include enough of it for your target graphics. But that's very expensive, and every time a console maker includes EDRAM in its console it's always the same problem: they want to reduce costs by avoiding fast main RAM, but they ALWAYS end up with terrible bottlenecks and problems from an insufficient quantity of EDRAM (PS2, Xbox 360)... so unless Microsoft comes up with 128 MB of EDRAM, it's just not worth it.

In short, I prefer faster, bigger general-purpose main RAM anytime, anywhere over an insufficient amount of EDRAM.
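For a rough sense of scale, here is a back-of-the-envelope Python sketch of how quickly 1080p framebuffer memory adds up (the 32-bit colour/depth formats and the 4-target deferred G-buffer are assumptions for illustration, not anything from the rumors):

pixels = 1920 * 1080

def mb(num_bytes):
    # bytes -> mebibytes, rounded for readability
    return round(num_bytes / (1024 * 1024), 1)

colour = pixels * 4                  # one 32-bit (RGBA8) colour target
depth  = pixels * 4                  # 32-bit depth/stencil
print(mb(colour + depth))            # ~15.8 MB: already past Xenos' 10 MB
msaa4  = pixels * 4 * (4 + 4)        # 4x MSAA colour + depth
print(mb(msaa4))                     # ~63.3 MB
gbuffer = pixels * 4 * 4             # assumed 4-target deferred G-buffer
print(mb(msaa4 + gbuffer))           # ~94.9 MB, heading toward that 128 MB figure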


What a lot of people forget ... this generation is all about massively parallel and concurrent work...

I'm guessing that there may be cases where we can make more efficient use of this xRAM now, thanks to this trend.

I'll find the MS patents that pretty much make these observations.
 
If he was referring to the alpha kits that were housed in the server towers, he would have been very impressed. Didn't they have a 780-watt PSU?

Could be this as well. Though the GPU is still supposed to be the same across kits.

Yeah, I have seen and read all of that, and you have yet to show me where he changed his tune. That is what I am asking you, because as far as I know he is saying that both will be close, with some things favoring one or the other as the case may be. He even said back then that the Durango GPU was weaker, but I cannot recall him singing a different tune. And he is a developer, mind you, so they know the final specs, even if they had been working with alpha or beta kits.

?

He said right there that Durango was more powerful and he's not saying that anymore. I don't know how much clearer it can be.
 
Could be this as well. Though the GPU is still supposed to be the same across kits.



?

He said right there that Durango was more powerful and he's not saying that anymore. I don't know how much clearer it can be.

My question is, when did he say it? It's really a simple question, actually; just point me to the post. Mind you, the Durango might in fact be less powerful, but that is not what I am saying. You are implying that those in the know have "changed their tune", citing Lherre as one of them, so all I am asking is that you point me to it. I want to know if you are drawing your own conclusion or if that is exactly what he said.
 
My question is, when did he say it? It's really a simple question, actually; just point me to the post. Mind you, the Durango might in fact be less powerful, but that is not what I am saying. You are implying that those in the know have "changed their tune", citing Lherre as one of them, so all I am asking is that you point me to it. I want to know if you are drawing your own conclusion or if that is exactly what he said.

I pointed you to the post. Xpider translated it. It was the first one I linked.
 
Let's assume for a moment that the rumors for the systems are true. We're looking at a 1.8 vs 1.2 TF GPU; that's roughly the discrepancy between a Radeon 7770 and a Radeon 7850. While the performance difference isn't as dramatic as a 7770 vs a 7970, I would say that it's definitely noticeable, and I wouldn't use the word "comparable" to describe it.

I wouldn't expect either GPU to be imbalanced in terms of the other units (TMUs and ROPs), so the way I see it, you would have to have major architectural differences (and I don't think embedded RAM in the ROPs would be enough. Would it?)

Perhaps an improved ISA for the lower-FLOPS GPU, allowing a much higher utilization of the theoretical max? In theory, if the lower GPU can achieve 75% utilization of its FLOPS while the higher one maxes out around 50%, then they are roughly equivalent. What's a typical utilization profile for a workload on a modern GPU nowadays?
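For what it's worth, the utilization argument does balance out arithmetically with those (purely hypothetical) percentages; a quick Python sketch using the rumored peaks:

# Effective throughput = theoretical peak * achieved utilization
orbis_peak   = 1.8          # rumored TFLOPS
durango_peak = 1.2          # rumored TFLOPS
print(durango_peak * 0.75)  # 0.9 TFLOPS effective at 75% utilization
print(orbis_peak * 0.50)    # 0.9 TFLOPS effective at 50% utilization
# Only a utilization gap of roughly that size in Durango's favour would
# close the 1.5x raw-FLOPS difference; anything smaller leaves a real gap.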

That's an unfair comparison, because the 7850 doesn't only have a FLOPS advantage over the 7770; it has more of everything.
 
It's interesting: from the DF article the Orbis GPU is Pitcairn-based, but it doesn't seem as balanced as Pitcairn. Pitcairn has 154 GB/s, yet this GPU has 192 GB/s. Also, Orbis' GPU loses 50 MHz and 2 CUs and has ~330 fewer GFLOPS vs Pitcairn. So they increase memory bandwidth but reduce its clocks? This must be for cost and heat reasons, or maybe some things are not lining up right with these rumors.
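The ~330 GFLOPS figure falls out of the standard GCN peak-rate formula if the reference Pitcairn part is taken as 20 CUs at 850 MHz (an assumption implied by the numbers above; the retail 7870 actually clocks higher), and the same formula is where the rumored 1.84 TF comes from:

# GCN peak GFLOPS = CUs * 64 lanes * 2 FLOPs per clock (FMA) * clock (GHz)
def gflops(cus, mhz):
    return cus * 64 * 2 * (mhz / 1000.0)

pitcairn = gflops(20, 850)   # assumed 20-CU reference at 850 MHz -> 2176.0
orbis    = gflops(18, 800)   # rumored Orbis GPU: 18 CUs at 800 MHz -> 1843.2
print(pitcairn - orbis)      # ~333 GFLOPS, i.e. the "~330 fewer" above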
 
In short, I prefer faster, bigger general-purpose main RAM anytime, anywhere over an insufficient amount of EDRAM.

Yes, there's no denying that; however, the EDRAM problems/bottlenecks in the PS2/X360 were worse than they had to be.
For example, the fact that the X360 EDRAM was the only memory you could render to, and that you couldn't read it back from a shader, was a terrible limitation that IMHO was just bad design/planning and not inherent to EDRAM technology. It's likely that was due to late design changes (e.g. they originally planned for a lot more EDRAM), and it definitely feels that way looking at the "Resolve" fixed-function hardware, which again seems like an afterthought. Had the "Resolve" been a bit more flexible, it could potentially have alleviated all the current EDRAM shortcomings.
 
If they went from Steamroller to Jaguar when the specs had 18 CUs for the GPU, maybe they also upped the CUs for the final design with the TDP savings they got by ditching Steamroller. Or did the added 2 GB of RAM eat up all those savings? If the chip is smaller now, couldn't they end up pad-limited when changing process to 20 nm, if they stay with the 256-bit interface and GDDR5?
 
I think all the people conjecturing ways that Durango will make up the difference with Orbis are somewhat missing the point.

As per the leaked roadmap, MS were aiming for 6-8x the performance of the 360, and the specs we have are precisely that. They didn't know what kind of power Sony was going for, and from this gen with the Wii, they know that performance doesn't equate to sales.

So this time, they're going for a console with Kinect, media, social, apps, and connectivity, all tightly integrated with the Windows ecosystem as its focus.

Hence, the specs we have are the specs, and I don't think this much-hyped 'special sauce' is going to bridge the 50% TF gap with Orbis. As Bkilian has hinted, people are hugely exaggerating the three custom blocks into things like ray tracers etc., when they are far more likely to be humbler fixed-function hardware like audio/video DSPs, or even this blitter.
 
This is, again, jumping to conclusions.
It's prediction, which requires assumptions. I agree we should always start our sentences with "I guess" or "I think"; I try, and sometimes I forget. Would you please enlighten us with your prediction for the 720: what new technology will help it go beyond the latest rumor? Right now, my best guess is a large enough internal frame buffer in SRAM, which more than offsets the BW deficiency of DDR3, plus more ROPs. Do you believe in the blitter? It ain't over until the "fat lady" has sung, so we can speculate about anything.
 
That's an unfair comparison, because the 7850 doesn't only have a FLOPS advantage over the 7770; it has more of everything.

That's why I said I wouldn't expect either of the two GPUs to be imbalanced. I would expect both console GPUs to have the appropriate number of texture units and ROPs to fully take advantage of their design.
 
Does anyone know how much memory SVO required in UE4? It's been a while since I last read about it, but I couldn't find any specific figures now.
 
Since no one seems capable of keeping this thread on topic, what do you say we just close it for a while?
 
Moved a bunch of posts over to this thread: http://beyond3d.com/showthread.php?t=61753

Everyone - try to keep this discussion focused on the rumors themselves, technical discussion on merits of design, and hypothesis/prediction. The other thread linked above is fairly active itself, and can accommodate broader discussion as to what external factors might influence console design decisions/strategy for both MS and Sony. Let's try to not prompt a second lockdown here by straying off topic.
 
Can someone confirm I have this right:

Orbis is a single APU/chip with a 1.84 TF GPU + 8 Jaguar cores @ 1.6 GHz, with an added "CU module".

Reading some sites, a lot of people still think it's a separate GPU, so I want to make sure.
 
That's right, although the added CU(s) are just speculation as to what the other general compute component may be. It makes sense, since the current Pitcairn has 20 CUs; this way they can dedicate 2 to helping out the CPU, but perhaps prioritise the communication between those CUs and the CPU over the other "graphics" CUs. At this stage rumors do point to everything being in the one SoC/APU.
 
If the full die has 20 CUs, I would expect them to disable 2 for yield.
I thought Orbis would be 2 chips; when did this change?
 
That's right, although the added CU(s) are just speculation as to what the other general compute component may be. It makes sense, since the current Pitcairn has 20 CUs; this way they can dedicate 2 to helping out the CPU, but perhaps prioritise the communication between those CUs and the CPU over the other "graphics" CUs. At this stage rumors do point to everything being in the one SoC/APU.

I thought it was right...cheers.

If the full die has 20 CUs, I would expect them to disable 2 for yield.
I thought Orbis would be 2 chips; when did this change?

This quote from the DF article is what convinced me it was a single chip/APU:

The news that so much processing power is packed onto a single processor is highly significant to the point where credibility could be stretched somewhat

A lot of earlier rumours did say it was an APU+GPU, but I guess that was just the dev kits.
 
Is there anything preventing APUs from having large GPUs? If so, that'd be one thing that makes me think they're still on an APU+GPU path. Though I guess with tiny Jaguar cores it's probably no big deal?
 
If the main system of Orbis is one APU connected to eight GDDR5 chips [or Wide IO RAM in the final design/first revision] with a rumored bandwidth of 192 GB/s, what additional hardware could be included on a theoretical "separate backwards-compatibility cartridge" [only Cell?], and what kind of bandwidth to the APU would be needed for such external hardware [is PCIe x1 enough]?
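On the 192 GB/s figure: it is consistent with eight 32-bit GDDR5 devices forming a 256-bit bus at an assumed 6 Gbps per pin (a sketch only; the chip count and speed grade are just what the rumor implies), e.g.:

# Peak GDDR5 bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps
chips          = 8
bits_per_chip  = 32
data_rate_gbps = 6.0                       # assumed 6 Gbps GDDR5
bus_bits = chips * bits_per_chip           # 256-bit aggregate bus
print(bus_bits / 8 * data_rate_gbps)       # 192.0 GB/s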
 