NGGP: NextGen Garbage Pile (aka: No one reads the topics or stays on topic) *spawn*

GDDR3 was up to 2400 MHz a few years back. And you can get that G-spec DDR3 that acts like GDDR3 - I wonder if it was really necessary to top out the main memory at 2133 MHz just to avoid more expensive GDDR5 on its phatter bus.
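Back-of-envelope, and purely as a sketch assuming 256-bit buses and leak-ish data rates (the GDDR5 speed is a guess), the peak figures work out roughly like this:

```python
# Peak bandwidth = (bus width in bits / 8) * transfer rate.
# Bus widths and data rates here are assumptions based on the leaked figures.
def peak_bw_gbs(bus_bits: int, mega_transfers: float) -> float:
    return bus_bits / 8 * mega_transfers * 1e6 / 1e9

print(peak_bw_gbs(256, 2133))  # DDR3-2133 on a 256-bit bus    -> ~68 GB/s (the Durango rumour)
print(peak_bw_gbs(256, 5500))  # GDDR5 @ 5.5 Gbps, 256-bit bus -> ~176 GB/s (roughly the Orbis rumour)
```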
 
It's not worse than people acting as if Durango only has exactly 68 GB/s of BW and nothing more (I believe Genz just did that)

Ignoring that the EDRAM or whatever will help quite a lot.

I look at it like this: Xbox had about half the main BW of PS3, one ~20 GB/s bus vs two roughly, yet it did fine, if not even better, in BW because of EDRAM. (It seemed you had more half-res effects on PS3 games, though both consoles suffered.)

What do you know, Durango has what looks to be (very roughly) about half the main BW of Orbis, and some EDRAM...but more of it and presumably much more flexible (rough numbers below).

I don't see why things couldn't work out similarly.
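For what it's worth, here is the rough arithmetic I'm going by - the last-gen figures are the commonly quoted specs and the next-gen figures are only the leaks, so treat all of it as assumptions:

```python
# Main-bus bandwidth comparison (GB/s). Last-gen figures are the commonly quoted
# specs; next-gen figures are only the leaked/rumoured numbers.
last_gen = {
    "Xbox 360": 22.4,        # GDDR3 main memory, plus the 10MB EDRAM on the side
    "PS3": 25.6 + 22.4,      # XDR (Cell) + GDDR3 (RSX)
}
next_gen = {
    "Durango": 68.0,         # DDR3, plus the 32MB ESRAM (~102 GB/s) on the side
    "Orbis": 170.0,          # unified GDDR5 (the ~170 GB/s rumour)
}
print(last_gen["Xbox 360"] / last_gen["PS3"])   # ~0.47 - roughly half
print(next_gen["Durango"] / next_gen["Orbis"])  # ~0.40 - roughly half again
```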
 
We have no idea about the RAM setup, so it could well be that the bandwidths do just add up. Or not. TBDR rendering to tiles in ESRAM would give 110 GB/s of draw and render BW, with texturing from DDR3 plus everything else on that bus (Orbis's 170 GB/s won't be just graphics either; the whole of the game, audio, IO and so on act on that bus too). Then again, Durango may only be able to render from ESRAM with everything copied to/from DDR3, meaning 60 GB/s of writing to and from ESRAM leaves only 40 GB/s of rendering BW. Both extremes are sketched below.

We Don't Know!
Anyone drawing conclusions, including whether to sum the BW or not, is just making shit up.
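Just to put rough numbers on those two extremes - a sketch only, using the leaked 68 GB/s DDR3 and ~102 GB/s ESRAM figures and a made-up copy-traffic number:

```python
# Two bandwidth-accounting extremes for the rumoured Durango memory setup.
ddr3_bw, esram_bw = 68.0, 102.0      # GB/s, leaked figures (assumptions)

# Extreme 1: render into ESRAM while texturing straight from DDR3 -
# the two buses work in parallel and the peaks simply add.
best_case = ddr3_bw + esram_bw        # ~170 GB/s of combined traffic

# Extreme 2: everything has to be staged through ESRAM, so a chunk of its
# bandwidth is burned on copies in/out rather than on actual rendering.
copy_traffic = 60.0                   # hypothetical copy-in/copy-out traffic
worst_case = esram_bw - copy_traffic  # ~40 GB/s left for rendering

print(best_case, worst_case)
```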

It's pretty predictable that there would be some who are quite happy with the narrative that Orbis > Durango and who are therefore agitated when the validity of that narrative is called into question. It's equally predictable that there would be some who are so bothered by the narrative that Orbis > Durango that they will grasp at any opportunity to disrupt it, no matter how fanciful.

If you think we know everything there is to know about these machines based on these leaks, and that 1.8 vs. 1.2 and 170GB/s vs. 68GB/s (plus 102GB/s) is all we'll ever need to compare the performance of Orbis and Durango, you may want to consider whether you are really informed enough - both by what has been leaked and by your personal knowledge of the hardware involved and the software algorithms that will run on it - to be confident in that conclusion.

If you are convinced there is some missing equalizer in Durango then maybe consider the possibility that that conviction isn't based on objective reasoning but instead is based on your hope for it to be true.
 
If you are convinced there is some missing equalizer in Durango then maybe consider the possibility that that conviction isn't based on objective reasoning but instead is based on your hope for it to be true.

True, but by the same token there seem to be more who downplay at every turn any possible weakness in PS4. 4 CUs dedicated to compute and unable to help gfx greatly? No, that can't possibly be true. A core reserved for the OS, or another disabled for yields? No way, there's no need to do that at all. And there have been MANY posts speculating that somehow there are extra ARM cores or DDR3 or LPDDR pools so the PS4 magically doesn't need any OS resources, or that it's like Vita and the OS scales to nothing for games. It's quite fanciful at times.
But I have not seen any of those so interested in journalistic integrity with regard to Durango special sauce attacking those posts.


I really see few pointing at special sauce at this point, and many more attacking the idea of it than touting it.

The news that Orbis may have some CUs crippled for gfx and dedicated to compute, or that Durango may have better triangle throughput, shows me just how in flux we are regarding the hardware.

Then again, I don't see the point of not discussing them either, since if we wait for final hardware we might as well shut this board down for ten months.
 
There is a huge difference between someone asking "Is it possible that Orbis will use an ARM core for OS and DRM?" and someone answering "Yes, could be a smart move", and someone presenting himself as an insider and saying "Don't worry, Durango will have special sauce for this!".

Can you see the difference? The first one is a simple discussion based on assumptions. You can easily join this discussion and say "I don't think it's a good idea because...". The second one is a prediction. Certain people use their authoritative status to make predictions that can't be disproved since they lack concrete information.
 
It's not worse than people acting as if Durango only has exactly 68 GB/s of BW and nothing more (I believe Genz just did that)

Grenz. And I did no such thing. Xenio asked why you'd have to copy data between the embedded memory and the main memory, and I was just illustrating "that's how it works" by showing the consequences of not doing so. In order to gain benefits from the embedded memory you have to be either writing to or reading from it. And you won't have anything to read or write if at some point you don't copy in new data from main memory or copy out results to main memory. Moving data between the pools is a necessary component of using both effectively. Nonetheless, this entails a certain amount of overhead that does not affect an actual UMA design (like Orbis). Obviously moving data back and forth is going to net you far better results than completely ignoring either pool of memory, but moving data around still consumes bandwidth, with or without "Data Movement Engines".
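To put a (made-up) number on that overhead - just a sketch of what shuffling a 32MB working set between the pools every frame would cost:

```python
# Cost of refreshing a hypothetical 32MB ESRAM working set once per frame:
# copy-in reads from DDR3 and writes to ESRAM, copy-out does the reverse,
# so both pools see the copy traffic. Sizes and frame rate are assumptions.
esram_mb = 32
fps = 60
one_way_gbs = esram_mb / 1024 * fps   # ~1.9 GB/s for a full copy each frame
per_pool_gbs = 2 * one_way_gbs        # ~3.75 GB/s of copy traffic on each pool
print(one_way_gbs, per_pool_gbs)
```

Small next to the peak figures, but it's traffic on both buses that produces nothing visible, and it scales with however much data you actually stream through the small pool.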
 
Load assets from DDR3, render to ESRAM. No asset copying is required. That's one extreme. The other is having to copy everything to ESRAM before the GPU can use it. Reality fits somewhere between those two.
 
True, but by the same token there seem to be more who downplay at every turn any possible weakness in PS4. 4 CUs dedicated to compute and unable to help gfx greatly? No, that can't possibly be true. A core reserved for the OS, or another disabled for yields? No way, there's no need to do that at all. And there have been MANY posts speculating that somehow there are extra ARM cores or DDR3 or LPDDR pools so the PS4 magically doesn't need any OS resources, or that it's like Vita and the OS scales to nothing for games. It's quite fanciful at times.
But I have not seen any of those so interested in journalistic integrity with regard to Durango special sauce attacking those posts.

What we have is a situation where people on one side of the divide say "that's not what the rumors indicate" and people on the other side say "maybe there's secret magic inside!" It's not hard to suss out which of those parties to be dismissive of. This isn't a political campaign. We don't have to be fair and balanced or offer equal air time. If your argument requires secret technologies that violate the laws of physics and mathematics, we have no reason to take it seriously.

I think your main problem is there aren't really any drawbacks to the Orbis design as we understand it. Most of your list is literally the invention of people who began with the premise that Orbis can't be more powerful than Durango and, at the same time as they invented theoretical advantages Durango may have, concocted theoretical deficiencies in the Orbis design. There are no credible rumors to that effect, only the idle speculation of partisans.

As for the "fanciful" PS4 theories: I haven't seen anyone insisting Orbis must contain secret ARM cores or additional OS memory. Mostly it's just drive-bys from people wondering why Sony didn't add special OS memory. Anyone suggesting that seriously is a fool, and I have on multiple occasions attempted to disabuse them of the notion. Adding LPDDR in some crazy place is neither cheap nor easy, nor is it even desirable.
 
Load assets from DDR3, render to ESRAM. No asset copying is required. That's one extreme. The other is having to copy everything to ESRAM before the GPU can use it. Reality fits somewhere between those two.

Well, if you aren't consuming peak bandwidth when you write to the embedded memory in that situation (say for a 30fps 1080p game), the leftover is essentially "wasted" (to the tune of something like half your ESRAM bandwidth). And it's unlikely all your buffers will fit in 32MB if you're going for 1080p anyway. If it works like the 360, your final frame still needs to end up in main memory before it can be displayed, too. Plus how are you supposed to use the ESRAM as GPGPU special sauce if you never copy working data in?
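A rough sketch of why 32MB gets tight at 1080p, with assumed (not leaked) buffer formats:

```python
# Do 1080p render targets fit in 32MB? Buffer formats here are assumptions.
w, h = 1920, 1080
mb = lambda bytes_per_pixel: w * h * bytes_per_pixel / (1024 * 1024)

colour_mb  = mb(4)      # ~7.9 MB for one 32bpp colour target
depth_mb   = mb(4)      # ~7.9 MB for 32-bit depth/stencil
gbuffer_mb = 4 * mb(4)  # ~31.6 MB for a four-target deferred G-buffer alone
print(colour_mb, depth_mb, gbuffer_mb, gbuffer_mb + depth_mb)  # G-buffer + depth already tops 32MB
```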
 
Actually it was going pretty well until all the "well this guy said this and that guy said that and like omg can you believe the nerve of some guys to say that stuff and how can some of you believe that stuff!!!" started.

But maybe that's what this thread is for.
 
Crazy philosophical question, but since ultimately I consider the new consoles to be very similar set-top boxes capable of playing pretty much the same games, what exactly is stopping Sony and MS from going to AMD and asking for the same technology, then putting the same amount of RAM in the boxes and releasing the darn things?
I mean, all this talk of 'more RAM but less BW' and 'more GPU less CPU'... What is the bloody point?
It's more expensive for them, it's more expensive for the third party developers who in the end still have to develop different versions of the same games, and work hard at it to make them look as similar as possible... it ends up being more expensive for us (both hardware and software).
What is the point? Apart from obvious marketing 'my dick is bigger than yours' campaigns and resulting fanboyism? Is the idea of having a 'more powerful console' really worth the increased costs of development, both hardware and software?
 
Crazy philosophical question, but since ultimately I consider the new consoles to be very similar set-top boxes capable of playing pretty much the same games, what exactly is stopping Sony and MS from going to AMD and asking for the same technology, then putting the same amount of RAM in the boxes and releasing the darn things?
I mean, all this talk of 'more RAM but less BW' and 'more GPU less CPU'... What is the bloody point?
It's more expensive for them, it's more expensive for the third party developers who in the end still have to develop different versions of the same games, and work hard at it to make them look as similar as possible... it ends up being more expensive for us (both hardware and software).
What is the point? Apart from obvious marketing 'my dick is bigger than yours' campaigns and resulting fanboyism? Is the idea of having a 'more powerful console' really worth the increased costs of development, both hardware and software?

Because then all it is is a PC that developers can program closer to the metal on, with an always-on media center application. They may as well just launch Live and PSN as competing Steam services.

It's still an area where hardware differentiation can be monetized, and thus it's in their best interest to develop the console most appealing to developers and consumers in order to maximize profit.
 
Crazy philosophical question, but since ultimately I consider the new consoles to be very similar set-top boxes capable of playing pretty much the same games, what exactly is stopping Sony and MS from going to AMD and asking for the same technology, then putting the same amount of RAM in the boxes and releasing the darn things?
I mean, all this talk of 'more RAM but less BW' and 'more GPU less CPU'... What is the bloody point?
It's more expensive for them, it's more expensive for the third party developers who in the end still have to develop different versions of the same games, and work hard at it to make them look as similar as possible... it ends up being more expensive for us (both hardware and software).
What is the point? Apart from obvious marketing 'my dick is bigger than yours' campaigns and resulting fanboyism? Is the idea of having a 'more powerful console' really worth the increased costs of development, both hardware and software?

Then perhaps Google and Samsung will do the same. I suspect MS and Sony would want full control of their ecosystems.
 
Because then all it is is a PC that developers can program closer to the metal on, with an always-on media center application. They may as well just launch Live and PSN as competing Steam services.

It's still an area where hardware differentiation can be monetized, and thus it's in their best interest to develop the console most appealing to developers and consumers in order to maximize profit.

But they're already pretty much PCs that developers can program closer to the metal on. The problem I'm talking about is that instead of having one type - differentiated by first-party games and the obviously different services available on each console - they are still trying to make them different. I'm not sure I believe that in this day and age it's still worth the trouble.
 
Crazy philosophical question, but since ultimately I consider the new consoles to be very similar set-top boxes capable of playing pretty much the same games, what exactly is stopping Sony and MS from going to AMD and asking for the same technology, then putting the same amount of RAM in the boxes and releasing the darn things?
I mean, all this talk of 'more RAM but less BW' and 'more GPU less CPU'... What is the bloody point?

Looking at the specs, I think both Sony and MS agree with you... and I wouldn't be surprised if a key reason was that Sony/MS both asked the same people "what sort of performance do you want to see in generation+1?"...

*BUT* Sony and MS are building different devices because they have different strategic aims.

The Wii was built to be a 'home console' for the whole family, pushing the new gadget.
The PS3 was built to push blu-ray and be a home entertainment center.
The 360 was built to be a 'pure' gaming machine.
The Wii-U was built to try to cash in on the tablet wave and the (old) popularity of the Wii.
The Durango seems to be aiming to push Kinect and be a home entertainment center.
The Orbis seems to aim to be a 'pure' gaming machine (maybe?).

In truth, the 2 consoles are strikingly similar anyway - practically the same CPU and GPU...
 
But they're already pretty much PCs that developers can program closer to the metal on. The problem I'm talking about is that instead of having one type - differentiated by first-party games and the obviously different services available on each console - they are still trying to make them different. I'm not sure I believe that in this day and age it's still worth the trouble.

Because the ROI on the NRE (non-recurring engineering) is greater than the NRE itself. If they were to do what you suggest, there would likely have to be some sort of partnership. Would they have to agree on APIs and firmware updates? What if one of them was ready to release a new console? Would the other have to be allowed access to that and demand they ship at the same time?

Moreover, what benefit does it have for the consumer? If they're guaranteed to have the same hardware, what's to prevent them from speccing Wii U-class hardware, since they only have to worry about software issues? It's obviously cheaper to build such a console, and they can likely start making a profit sooner. The competition is what drives greater specs. Nintendo doesn't feel they're in direct competition with Sony or MS, so they don't try to spec their hardware as such.
 
I mean, all this talk of 'more RAM but less BW' and 'more GPU less CPU'... What is the bloody point?

With an infinite amount of financial resources you can target 'more RAM and more BW' as well as 'more GPU and more CPU' at the same time, but if you want to make some profit in the future you have to compromise.

If one of the two console manufacturers says "We're going to pack some fancy peripheral feature with each console" and doesn't want to suffer heavy losses, then it has to slash the budget elsewhere - computing power, for example. It's all about which audience you're targeting. Nintendo did the same thing: the Wii was highly inferior in terms of processing power compared to the PS3 and Xbox 360, but Nintendo sold more units than Sony or Microsoft; it was a great avant-garde console approach.

The obvious question is: why do consoles have to be on the exact same level in terms of processing power anyway? I can very well imagine three different philosophies for the next gen: a casual console on one hand, a straightforward core gaming console on the other, and a jack-of-all-trades console in the middle. At the end of the day the only thing that counts for Sony, Microsoft and Nintendo is profit. As long as the money's rolling in they don't care if someone's disappointed with the technical specs.
 
If one of the two console manufacturers says "We're going to pack some fancy peripheral feature with each console" and doesn't want to suffer heavy losses, then it has to slash the budget elsewhere - computing power, for example. It's all about which audience you're targeting.

I agree, but it is "risky": you can't ignore a market like the "hardcores". Just look at Kinect game sales - they can't compare to Halo, Gears or Forza. While the best Kinect games sell ~2M (just 2 or 3 games), the best core games sell 4~5M.

If "core" gamers end up choosing PS4, the third parties will not be happy.
 