PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

DrJay24 & Betanumerical, you are probably right. I really really hope so.

I just want to make this little reflection.

1) We know that the PS4 drivers and dev tool kits are very mature;
2) We know, as many developers have already confirmed, that the PS4 is SUPER easy to develop on;
3) The PS4 is mainly built with off-the-shelf components (I just mean: no Move Engines, no ESRAM, no audio block or other dedicated components that would need some learning curve).

Now with all these points in mind:

1) Killzone SF uses ALL 18 CUs for rendering.
We all know that Guerrilla & the Killzone franchise have always been an incredible graphics and power showcase for Sony's machines...
Did Killzone's graphics look 50% better than anything else shown at E3?
No, absolutely not, at least to my eyes. Many other titles look superior in many ways, and that means something, since KZ has always stood for top-tier graphics.
OK, but with these graphics does it at least run at 60fps? No, not that either.

2) Driveclub, an arcade racing game, still isn't able to run at 60FPS...

3) The Order 1886 (not a launch title!) runs at 1920x800 ... NOT FULL HD!

4) Quantic Dream's 'The Dark Sorcerer' tech demo runs at 1920x800 ... NOT FULL HD!

5) The Unreal Engine demo has been hugely downgraded visually.

So I HAVE to believe that 18 CUs will not have such a big impact on rendering and graphics compared to 12 or 14, because if that is not the case, I start to get very worried that the PS4 has some other problem well hidden elsewhere!

Because if it is not, I start to think that Cerny is a great PR man and a great game design consultant, but maybe as platform hardware lead (this being his first experience in the area) he is not so capable after all.

Come on, we have 50% more graphics power but are not able to show this superiority in any way?

So I prefer to believe that the PS4 will not get a boost from using the extra 4 CUs for rendering, but maybe the PS4 could use those extra 4 CUs for something more useful.

Anyway, I hope to see at Gamescom an actual game running on a PS4 that looks 50% better than anything else shown by the competition. I really do!
 
The way in which games use the GPU is in no way a reflection of its actual power. To think that the only two options are that 18 CUs do not give you any significant performance boost over 12 or 14, or that there is some inherent flaw in the machine, is looking at it far too black and white. If you really want to compare the machines, I suggest you wait until we see the same game running on both of them, because comparing exclusives at various different stages of development is really not going to do any favours to anyone.

Also, I have yet to see anything touch the Dark Sorcerer tech demo; regardless of the resolution it was in, it was easily the most graphically impressive thing I have seen so far, and by a long shot.
 
1) Killzone SF uses ALL 18 CUs for rendering.
We all know that Guerrilla & the Killzone franchise have always been an incredible graphics and power showcase for Sony's machines...

If you watch the video of GG discussing the project, KZ:SF was *not* aiming to be that title. The focus of the development was to have a game ready for launch - comments through the video point to a *very* tight deadline, and how they were using 'not entirely optimized' assets as a shortcut to getting things running. (there's a good argument that such a development method would be more sustainable for the industry in general...)

How tight was the deadline? The 'biggest' launch title on either console was Halo 5 - and it didn't manage it...
 
Emotional subjective observations are not going to tell you any objective information about the hardware.
Watch Dogs will be the title most likely to highlight differences. It's running on a new engine called Disrupt which Ubisoft claim is highly scalable. I look forward to reading DF's analysis of this title running not only on PC and next gen, but current gen too.

Until games demonstrate a technical or performance difference between the consoles, or not, people may as well be debating about how many fairies can dance on the head of a pin.
 
Emotional subjective observations are not going to tell you any objective information about the hardware.

I have written several specific points - objective facts... Where do you see emotional, subjective facts?
I can assure you my emotional involvement, luckily, resides elsewhere.
I am equally far from all 3 consoles (and PC), and from any hardware equipment in general...
Can you affirm the same for yourself?

Also, I have yet to see anything touch the Dark Sorcerer tech demo; regardless of the resolution it was in, it was easily the most graphically impressive thing I have seen so far, and by a long shot.

Yes, I agree with you... Very impressive, but it is just a technical demo...
Do you remember the ones from the Xbox and PS2 era? Do you remember the ones on the Commodore Amiga? How did they compare with the actual games on those platforms? Maybe we have to be a little bit more realistic...

If you watch the video of GG discussing the project, KZ:SF was *not* aiming to be that title. The focus of the development was to have a game ready for launch - comments through the video point to a *very* tight deadline, and how they were using 'not entirely optimized' assets as a shortcut to getting things running. (there's a good argument that such a development method would be more sustainable for the industry in general...)

How tight was the deadline? The 'biggest' launch title on either console was Halo 5 - and it didn't manage it...

Well, I do not know, I have not seen the video you are talking about.
And regarding Halo, I believe no one was expecting Halo 5 to be a launch title... especially if we consider when the last one was released.
I am also surprised by the 2014 release window...
 
I have written several specific points - objective facts... Where do you see emotional, subjective facts?
I can assure you my emotional involvement, luckily, resides elsewhere.
I am equally far from all 3 consoles (and PC), and from any hardware equipment in general...
Can you affirm the same for yourself?

Looking at videos of E3 demos (unless you played them at E3) from games at various stages of development, from various developers, all with different engines, art and models, and then filtering that information through your own bias, is all very subjective. Even something objective like framebuffer resolution is meaningless without context; it is simply a design choice. COD wanted 60fps, so they went with sub-HD; Halo 3 wanted HDR, so they went with sub-HD; FM5 wants 60fps, so they went with baked lighting, etc.
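
Just to put that framebuffer trade-off in numbers, a quick sketch (nothing here beyond the resolutions already quoted in this thread and plain arithmetic):

Code:
# Pixel counts for the framebuffer sizes mentioned in this thread.
full_hd   = 1920 * 1080   # 2,073,600 pixels
order_res = 1920 * 800    # 1,536,000 pixels (The Order / Dark Sorcerer figure quoted above)

print(order_res / full_hd)   # ~0.74 -> roughly 26% fewer pixels to shade each frame
print(full_hd - order_res)   # 537,600 pixels per frame freed up for framerate or effects instead

Whether that saving goes to framerate, effects, or nothing visible at all is exactly the kind of design choice being described above.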

The best you can hope for is a multi-platform game created by the same team. Even that doesn't account for time, manpower, developer tools, driver maturity, etc., but it is better than anything else.

BTW are you asking for my credentials? :rolleyes:
 
Somewhere in all this, Rangers' point about 40% less CPU power, which appears to be the basis for the suggestion, is being overlooked. I seem to recall that the math for the claim has been disputed as somewhat dubious because the additional compute doesn't fall into the type which is frequently utilized... can someone shed some light on this?
 
Also, I believe there's been some discussion recently to the effect that Southern Islands starts to scale poorly as you ramp up the CUs on PC, although I haven't personally followed that line of discussion too much...

This is incorrect. If you double all your system resources you will double performance; it's as simple as that. The only reason you won't get 50% more performance out of the PS4 over the XB1 is that not everything is increased by 50%.

There are absolutely no diminishing returns (in terms of raw graphics performance) from going over 14 CUs or any other number of CUs, as long as the rest of the system doesn't bottleneck them.

Case in point: in purely GPU-limited situations, even going from 32 CUs to 64 CUs (7970GE -> 7970GE CrossFire) can yield extremely close to a 100% performance increase, despite the driver overhead penalty of CrossFire, if the game is designed to make efficient use of AFR:

http://www.anandtech.com/show/6915/amd-radeon-hd-7990-review-7990-gets-official/10
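
For reference, this is how the oft-quoted '50% more' figure falls straight out of the CU counts. It is peak theoretical throughput only, under my own assumptions of GCN's 64 ALUs per CU, an FMA counted as 2 FLOPs, and the widely reported ~800 MHz clocks - not a measured result:

Code:
# Peak single-precision throughput from CU count alone (GCN sketch).
def peak_gflops(cu_count, clock_ghz, alus_per_cu=64, flops_per_alu=2):
    return cu_count * alus_per_cu * flops_per_alu * clock_ghz

ps4 = peak_gflops(18, 0.8)   # ~1843 GFLOPS -- the familiar 1.84 TFLOPS figure
xb1 = peak_gflops(12, 0.8)   # ~1229 GFLOPS at the same assumed clock
print(ps4 / xb1)             # 1.5 -> the "50% more" headline number
# If the rumoured 853 MHz upclock of the XB1 GPU happens, the ratio drops to ~1.41.

As with the CrossFire example, whether that peak shows up in actual frame rates depends on nothing else in the system bottlenecking the CUs.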
 
Watch Dogs will be the title most likely to highlight differences. It's running on a new engine called Disrupt which Ubisoft claim is highly scalable. I look forward to reading DF's analysis of this title running not only on PC and next gen, but current gen too.

Until games demonstrate a technical or performance difference between the consoles, or not, people may as well be debating about how many fairies can dance on the head of a pin.

Both the Activision and Ubisoft titles are going to be a good benchmark now that it seems Sony has been doing deals with these guys; I'm sure they'll get a ton of technical support from the ICE team. As Cerny pointed out, keeping all those development procedures they developed for the PS3 to themselves instead of sharing them with 3rd parties was not the smartest thing to do.
 
Driveclub, an arcade racing game, still isn't able to run at 60FPS...

It's a trade-off.
They get something in return for targeting 30fps.

The Order 1886 (not a launch title!) runs at 1920x800 ... NOT FULL HD!

We have the lead graphics/engine programmer of Ready at Dawn on these forums; go ask him if you want to know more, but I doubt he will want to follow your line of thought.
Actually no, don't ask him, since he has better things to do than listen to insinuations based on bias.

The Unreal Engine demo has been hugely downgraded visually.

I suggest you actually read what Epic said on the matter.
 
No, absolutely not ... I should? ;)

Noooo, I am just referring to the emotional involvement in this subject!

I'm emotionally sad at the decline of the tech forums. It is 80% wishful thinkers, invented rumors, fake insiders and general versus crap. ;)
 
I suspect that in the end we will find that the whole "14+4 CU" or "hardware balanced at 14 CUs" question will come down to the CPU.

Probably the underpowered CPU could act as a bottleneck for the system.
And for this reason, probably, the CPU & GPU relationship is balanced at 14 CUs for rendering tasks.
Could it depend on CPU clock speed? The low bandwidth of Onion & Onion+? Other technical factors?

Maybe this is also the reason why MS, with a similar CPU, chose a 12 CU GPU.

I also have my personal theory: as I have said many times, I suspect that Sony wanted from the beginning to reserve 4 CUs in order to compensate for the weak CPU (because, as it seems, the PS4 lacks some of the X1's dedicated hardware that is intended to help the CPU).

I thought the Xbox has the lower number because the ESRAM takes up die space?
 
My understanding was that nowadays feeding jobs to the GPU, even more so with an 8-core CPU, is a trivial task; this should not really be a limitation, and either way a lot of graphics work could easily run across every single CU in the GPU.

You are downplaying the CPU far too much. There will be games on consoles this generation that will be more CPU limited rather than GPU limited. There will also be games on consoles this generation that will be more GPU limited than CPU limited.

The point? There is no perfect CPU to GPU ratio of resources. On desktop you can pair up a 7770 and an i7-4770 and you'll be massively GPU limited most of the time. Pair that CPU up with a GeForce Titan and you'll start to run into CPU limitations in some games. Pair it up with 2x or 3x Titans and you'll run into CPU limitations in more games. Pair that 7770 with a very weak CPU and even that could potentially be CPU limited most of the time.

The only way you'd never be CPU limited on the PS4 is if the CPU was so beefy that you ended up being always GPU limited. Which wouldn't be a terribly good design. Just like the PS3 being massively GPU limited wasn't a terribly good design. The SPUs could make up for a lot of that, but imagine what developers could have done with those SPUs if they didn't "need" to use them to bolster graphics performance.

Taking what Cerny has said, it's much better to be overprovisioned on GPU than on CPU. You can potentially do more with extra GPU compute resources than you could with extra CPU resources. Getting 400 more GPU GFLOPS is significantly easier than getting 200 more CPU GFLOPS, even if they are less useful in a "general" sense.
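
To put rough numbers on that, a sketch under my own assumptions (8 Jaguar cores at ~1.6 GHz with a 128-bit FPU retiring 8 single-precision FLOPs per cycle, and the same GCN per-CU figures as in the earlier sketch; none of these figures come from the post itself):

Code:
# Rough peak-FLOPS comparison: the 4 "spare" CUs vs the entire 8-core CPU.
cpu_gflops = 8 * 1.6 * 8        # 8 cores * 1.6 GHz * 8 SP FLOPs/cycle = ~102 GFLOPS
four_cus   = 4 * 64 * 2 * 0.8   # 4 CUs * 64 ALUs * 2 FLOPs * 0.8 GHz  = ~410 GFLOPS

print(four_cus / cpu_gflops)    # ~4x -- the spare CUs alone dwarf the whole CPU's peak

Peak numbers only, of course; how much of that is actually reachable from game code is exactly the open question in this thread.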

Hence, they made a decision that it would be more beneficial to have an overabundance of GPU compute as you can always use that for compute if you end up being CPU limited in any given scene. And if a game requires it for whatever graphical effect they want to do, it's available for that as well.

It would not surprise me at all if the designers at Sony had done the same kind of extensive analysis of what resources games were using, what bottlenecks they were running into, etc. in a generalized sense, and came to the conclusion that in general 14 CUs (close to the 12 CUs that MS determined would be sufficient) would be perfect for the vast majority of next-gen titles. Then throw in 4 more CUs for anything developers might come up with that couldn't be predicted by extrapolating current-gen games into the future. If, in general, the generation follows their analysis pretty closely, then just encourage developers to use them for compute. If it doesn't, you have them there to use for graphics if needed.

Regards,
SB
 
I'm emotionally sad at the decline of the tech forums. It is 80% wishful thinkers, invented rumors, fake insiders and general versus crap. ;)

You are so right; at least we are lucky to have some people with us who can keep the bar high! ;)

But I suppose that in part it is inevitable during this limbo period before the actual launch of the next-gen systems!
 
You are downplaying the CPU far too much. There will be games on consoles this generation that will be more CPU limited rather than GPU limited. There will also be games on consoles this generation that will be more GPU limited than CPU limited.

You are tying the two together as if the CPU is only feeding info to the GPU. In reality the CPU is doing mostly independent work; you could have a CPU-limited game with any number of CUs. The only reason they over-spec the CPU in a PC video card benchmark is to make the CPU have as little impact as possible, to truly test the breaking point and bottleneck of the card itself (GPU+VRAM).

If a developer makes a CPU-limited game, how does that say anything about the usefulness of CUs, which could be fully utilized no matter the number? Maybe I don't understand what you are saying, but I don't think things are as coupled as you make out. If Forza's AI bogs the CPU down and drops the frame rate to 56, does that mean MS should have used 11 CUs? Seems to me it is up to the developer to eliminate the bottlenecks where they exist, and turn things down if they're using too much CPU (AI, physics, audio, etc.). But as long as the CUs are working on independent rendering tasks, it could be 10 CUs or 30 CUs. There is no minimum CPU spec published with each ATI card, right? That 8 CU or 20 CU card can be paired with any reasonable CPU. If that CPU cannot keep up with the game code, you can certainly burn the GPU on MSAA or post-processing - as long as it doesn't require the CPU.

I'd love to hear the opinion of a dev, not just us web jockeys who need to read more and type less.
 
I'd love to hear the opinion of a dev, not just us web jockeys who need to read more and type less.

Pretty sure Silent Buddha is a dev :p
 
It works that way in, let's say, dual graphics cards, because you only get about 70~80% of the power from the second graphics card when utilizing SLI, but there is no evidence to support the suggestion that any CU past the 14th will benefit the system less than any of the CUs in the first 14. 18 CUs have 28.57% more computational power than 14 CUs. It's not "it's not balanced so we evaluate it at 25% or 20%". It's 28.57%, period.

I agreed with expletive's comment on this argument earlier. Resolution is VERY often associated directly with diminishing returns. Devs know this, even though many PC gamers refuse to accept it as true, let alone obvious. Hence so many console games ship with resolutions slightly lower than 720p (devs realize it's not worth processing all those pixels on a limited computing budget when you can't easily see the damn things).

Maybe Sony/AMD felt that throwing more pixels on screen, beyond what is typically doable in a modern game at 1080p, just isn't worth it. Wouldn't surprise me; after all, that is the reason MS added those display planes in the X1. Likewise, I'd assume at some point a certain level of AA/AF becomes basically not worth the processing effort. I'd imagine those are the things AMD/Sony may have been thinking of when considering how to balance the design.

The conclusion I've come to is that the CUs are designed with compute in mind, and there will very likely be cases where some developers find that doing compute with these resources gives better results than simply using them in the old-fashioned way. And that is actually a good thing.

Agreed, but we should note that it is a matter of perspective. Most people's first impression of the PS4's GPU was its purported 1.84 TFLOPS of "power". The 14/4 issue has been conflated and misunderstood for a while now, to the point that even bringing it up is taboo (even here), so it's no wonder many still hold the short-sighted view that real-world comparisons should treat a GPU with 1.84 TFLOPS dedicated to graphics rendering as the baseline. To someone coming from that point of view it would seem like a downgrade, even though it isn't.

The problem occurs when a positive design like this is being spun in a manner to make it sound like a negative.

I'd argue that it was commonly conflated as suggestive of pure graphics power* from day 1, when that isn't necessarily appropriate, and that this is what caused the issue (similar to the PS4's RAM reserve for the OS).

*Specifically, 'graphics power' in the sense of visual output on screen and technical graphics performance. In other words, it was being proclaimed all over the place that the PS4's 18 CUs would lead to '50% better performance' than the 12 CUs in the X1.

So in that sense I'd say it was a positive improvement to the PS4 from a design point of view, but the PR and common rhetoric surrounding the issue suddenly raise the specter of an apparent downgrade once people discover that not all of those 1.84 TFLOPS will be used for actual graphics.
 