Digital Foundry Article Technical Discussion Archive [2013]

"Wildly unbalanced" is certainly too strong, but the comments made by Cerny himself seem to clearly indicate that the PS4 is ALU-heavy and that those ALUs are likely to be poorly utilized if they aren't being fed GPU compute tasks on top of the traditional rendering workload.

The question then becomes: how are multiplatform developers going to be incentivized to do extra development to implement these extra compute tasks if they can expect roughly equivalent performance across platforms without it?

How about somewhere between unbalanced and too unbalanced :)

Obviously one big thing we don't know is how much "compute" is in the XB1 (or maybe we have a general idea but I don't know it), since if there is a significant amount then it may be something that both systems exploit, and as such more compute doesn't become a hindrance when it comes to "balance".

Of course PCs enjoy a bit of compute as well, although it's not well used AFAIK. Going forward, using some more compute on the PC side might minimize PCIe limitations and as such might add to the compute incentive.
 
Richard says that the Xbox One is a more balanced system than the PS4 -

Why would the PS4 be an unbalanced system without the GPGPU customizations?
How is the Xbox One more efficient than the PS4, as Richard states, with the data move engines and SHAPE audio?

This article reads like a textbook example of confirmation bias. If that's not Leadbetter's natural state, I can only assume his inside sources at Microsoft are so far up his ass at this point that he has become a human puppet.

It's not. He's just wrong and grasping at straws to downplay any PS4 advantage. Ooh, Microsoft profiled games looking for bottlenecks. I'm sure that never occurred to Sony. Weird how it is so much harder to find a bottleneck in the ps4 design than the Xbox One's though. Wonder how that happened? Oh well, here are some random and meaningless benchmarks designed specifically to nullify the PS4's hardware advantages!
Well, I think the article doesn't present the two systems properly, but I can see the basis on which he is writing what he is writing.

How I read it is in light of the reviews done by The Tech Report or now PC Perspective, and the usual "soft V-sync" used in console games.
If you want to sustain 30FPS, you need most of your frames to be rendered in 33ms or less.
The next step is 16ms, which is a huge jump. Say at the same settings Durango sustains 30FPS, so 33ms a frame, and the PS4 achieves 25ms: it makes no difference.
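A minimal sketch of that point, assuming a 60Hz panel and a swap that waits for the next refresh (the render times are just the ones from the example above):

```python
# A finished frame still waits for the next 60Hz refresh boundary, so 25ms
# and 33ms of render work both ship at 33.3ms and the game stays at 30FPS.
import math

REFRESH_MS = 1000.0 / 60.0  # one 60Hz refresh interval (~16.7ms)

def displayed_frame_time(render_ms):
    """Round the render time up to the next v-sync boundary."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

for render_ms in (33.0, 25.0, 16.0):
    shown = displayed_frame_time(render_ms)
    print(f"{render_ms:4.1f}ms of work -> displayed every {shown:4.1f}ms ({1000.0 / shown:.0f}FPS)")
```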

Now you should be able to push further on the PS4; the CPU bottleneck should be the same on both systems, so I'm not sure how the XB1 is more "balanced".
It seems tailored to render somewhere between 720p and 1080p without MSAA. On the other hand the PS4 should definitely handle AA and could have room with regard to resolution.

One can compare, for example, the Tech Report review of the HD 7850 and 7870 and the one comparing the 7850 and the 7790. Looking at BF3, say, one will see that the HD 7790 matches the HD 7850, but the settings are not the same: High vs Ultra, 4x MSAA vs none.
Then you have performance scaling on the GPU: it does not scale linearly with the CU count or even bandwidth. I guess it would if you were to scale the resolution accordingly.
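A toy bottleneck model of that scaling point (all the per-pixel costs and per-CU rates below are made up purely for illustration; 12 and 18 are just the two CU counts being discussed):

```python
# Toy model: frame time is the max of shader time and a fixed setup/geometry
# cost, so adding CUs only helps until the other stage becomes the limiter.
def frame_time_ms(cus, pixels, ops_per_pixel=2500.0,
                  ops_per_ms_per_cu=8e6, fixed_ms=18.0):
    alu_ms = pixels * ops_per_pixel / (cus * ops_per_ms_per_cu)
    return max(alu_ms, fixed_ms)

p720, p1080 = 1280 * 720, 1920 * 1080
print(frame_time_ms(12, p720))    # 24.0ms: shader-limited
print(frame_time_ms(18, p720))    # 18.0ms: 1.5x the CUs, only ~1.33x faster
print(frame_time_ms(18, p1080))   # 36.0ms: scale the resolution and the CUs matter again
```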

So I read it as the XB1 maximizing the "bang for buck", as the jump from High to Ultra is on average not easily spotted, and post-process types of AA do the job. The PS4 can do more, but they went further than capturing the low-hanging fruit.

Myself, I think that the XB1 is a bit low; looking at the HD 7770 vs HD 7790, one sees that the former seems limited by its shader throughput more than by anything else (those cards are usually tested without AA, not at the highest settings, @1080p, etc.).

It is easy to say, but I still think that when Sony decided to increase the RAM they went a bit too far; 6GB was fine, and as Durango's specs were mostly known they should have salvaged their chips:
16 CUs, 3 RBE partitions, one memory controller disabled. If Durango has any success, it should hold their system back in multi-platform games, which are still the bulk of the production.

Anyway, I used to think that Sony should go with a pretty cheap system before the specs were known, and I still do. Whereas Sony has had a nice PR campaign, they were the ones who had a massive knee-jerk reaction: they increased the RAM amount to match Durango, which means the matching effort on the OS is more a project than anything else, and they even gave up on EyeToy, which it seems was set to ship with every unit. IMO they got scared and decided to match and exceed MSFT on hardware in every aspect at the cost of EyeToy. They even cut the price so as not to have to go head to head with MSFT.

They should have gone even cheaper, with lower specs, with EyeToy or not, with an aggressive price-reduction scheme and a system easily mass-producible, and tried a blitz type of strategy.
Now they might succeed, but I'm not sure how that will fare for their financials; that is another matter, though.
 
Last edited by a moderator:
How about somewhere between unbalanced and too unbalanced :)

Obviously one big thing we don't know is how much "compute" is in the XB1 (or maybe we have a general idea but I don't know it), since if there is a significant amount then it may be something that both systems exploit, and as such more compute doesn't become a hindrance when it comes to "balance".

We know exactly how much compute potential is in the Xbox One: 1.2TFlops. The fault is assuming Xbox One is the "balanced" design which would make the PS4 "overbuilt" somehow. For all any of us know the PS4 is balanced and the Xbox One is underbuilt.

Frankly, the original Cerny quote is probably actually referring to the fact that the vast majority of the performance potential in the PS4 is locked up in the GPU. Coming from a PS3 where you had 200GFlops in the CPU and 250GFlops in the GPU, the PS4 would, comparatively, be more lopsided. But that's just as true of the Xbox One coming from that perspective.
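Rough arithmetic behind that framing (the PS3 figures are the ones quoted above; the ~0.1 TFLOPS CPU figure for the new consoles is my own ballpark assumption for eight Jaguar cores):

```python
# (CPU TFLOPS, GPU TFLOPS) -- ballpark peak figures, not measured numbers
systems = {
    "PS3": (0.20, 0.25),
    "PS4": (0.10, 1.84),
    "XB1": (0.10, 1.20),
}
for name, (cpu, gpu) in systems.items():
    share = gpu / (cpu + gpu)
    print(f"{name}: GPU holds {share:.0%} of peak FLOPS (GPU:CPU ~{gpu / cpu:.1f}:1)")
```

Either way, both new machines are far more GPU-heavy than the PS3 ever was.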
 
We know exactly how much compute potential is in the Xbox One: 1.2TFlops. The fault is assuming Xbox One is the "balanced" design which would make the PS4 "overbuilt" somehow. For all any of us know the PS4 is balanced and the Xbox One is underbuilt.
Well, I agree that the XB1 is indeed a bit "underbuilt". I used to give some credit to the rumors that spoke of an APU + HD 6670, whose combined throughput was indeed in the same ballpark as Durango's.
The idea that such low-power hardware (even in another form, i.e. a single SoC) could make it into a next-gen console was mostly mocked. The point is that even if I thought it made sense at the time (quite a while ago), we were speaking of a 40nm process; now, at 28nm, it is low even by my conservative standards.

Now, the PS4 is not what I would call "balanced". I used to have a lot of concern about Sony's financial situation, and it has gotten barely better; now they went up to 8GB of expensive RAM, a late addition with most likely lagging effort on the OS side of things. I would not be too surprised if that jump made up Sony's mind with regard to the paywall. I'm still curious about Sony's financials a couple of months after launch.

Another thing is that since Sony now asks money for online (though it comes with free games, but MSFT could match that), they had better have a great service. MSFT for sure has worked on that, quite a lot it seems, from the cloud to the system software. Sony just moved from 256/512MB to ~3GB; in my world, the thing at launch and for a while will not be as mature as MSFT's offering and integrated cloud-based services (work in progress).

I'll stop here; that talk goes further than the DF article or even the hardware at hand.
 
Last edited by a moderator:
We know exactly how much compute potential is in the Xbox One: 1.2TFlops.

Strictly speaking it was somewhat higher than that when the xb1 had the online requirement and hence could count on cloud computational resources being available universally. After they neutered the machine by removing the online requirement that all went away of course, but presumably the original physical + virtual setup influenced their design of the machine in how much computational power was built into the box and how much was available virtually, with the virtual side being expandable as the years went by.


You can't have an excess of compute power, because that compute power can always do other jobs.

If only people could have made that connection with standardized cloud computing. Sigh.
 
Last edited by a moderator:
Strictly speaking it was somewhat higher than that when the xb1 had the online requirement and hence could count on cloud computational resources being available universally. After they neutered the machine by removing the online requirement that all went away of course, but presumably the original physical + virtual setup influenced their design of the machine in how much computational power was built into the box and how much was available virtually, with the virtual side being expandable as the years went by.




If only people could have made that connection with standardized cloud computing. Sigh.

No, the check was only once every 24 hours; the Xbox One itself never required a constant Internet connection like an MMO would. As such, any game using the cloud was always going to be built with fault tolerance for your Internet cutting out. Don't confuse that with a few select games that were always going to be online, like Titanfall and Destiny, where if your internet drops, your game stops.
 
Mod: djskribbles is joining him, because saying, "I thought we weren't supposed to be talking about this," doesn't excuse you from replying to an OT post.
 
CUs won't scale 1:1 with performance? Breaking news? They can move more stuff from the CPU to the GPU at some point in the generation if the game is CPU bound.

Honestly, I think that outside of launch games, which are pretty much DX11 ports, the PS4 API (and maybe the bandwidth available to the GPU) will be a bigger deal than CUs. That's the only way the PS3 ever got close to the 360 in multiplatform games.
 
The next person to mention bias by GAF or DF or IGN or anyone else, in this or any other thread, is going to get a two-week vacation on me. Every other discussion seems to be about the messengers rather than the messages. Furthermore, the PS4 vs XB1 debate has already been hit on the head (twice?) and this thread has to be very wary of getting axed if it's going to undertake such a comparison. I won't call it just yet, but given various unknowns, I think it's best to wait until at least launch before we start into that debate.

Friendly reminder to thank Shifty Geezer for his generous handouts of Vacation time. I really would like to know where Shifty has managed to get all this vacation time from to be able to hand it out so liberally. :LOL:
 
Friendly reminder to thank Shifty Geezer for his generous handouts of Vacation time. I really would like to know where Shifty has managed to get all this vacation time from to be able to hand it out so liberally. :LOL:

Shifty runs the Time and Attendance unit of b3d. He's offered me comp time before too. :LOL:

The one lesson of the DF article, and even of the first two dev kits MS sent out, is that it is very hard to quantify the real-world improvements ESRAM delivers in the GCN architecture. Straight comparisons to discrete cards would make it seem like it fares closer to a 7790. But the fact that they put ESRAM on the GPU die is apparently yielding desirable benefits to the rendering pipeline that put it well beyond that chipset.
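One way to see why on-die ESRAM could matter more than a straight discrete-card comparison suggests, using the commonly cited figures of the time (~68GB/s DDR3, ~102GB/s ESRAM); the traffic split is a made-up parameter, not a measurement:

```python
DDR3_GBS, ESRAM_GBS = 68.0, 102.0  # commonly cited peak figures

def achievable_gbs(esram_fraction):
    """If a fraction of GPU traffic stays in ESRAM and the rest goes to DDR3,
    and both pools are used in parallel, total throughput is capped by
    whichever pool saturates first."""
    f = esram_fraction
    if f <= 0.0:
        return DDR3_GBS
    if f >= 1.0:
        return ESRAM_GBS
    return min(ESRAM_GBS / f, DDR3_GBS / (1.0 - f))

for f in (0.0, 0.4, 0.6, 1.0):
    print(f"{f:.0%} of traffic in ESRAM -> up to ~{achievable_gbs(f):.0f}GB/s aggregate")
```

Whether a real workload can keep the right slice of its traffic inside 32MB is exactly the hard-to-quantify part.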

Didn't MS use 7970s in the alpha and beta kits? That's a good indicator of the performance MS is targeting. The DF article lowballed even that.
 
No, the check was only once every 24 hours; the Xbox One itself never required a constant Internet connection like an MMO would. As such, any game using the cloud was always going to be built with fault tolerance for your Internet cutting out. Don't confuse that with a few select games that were always going to be online, like Titanfall and Destiny, where if your internet drops, your game stops.

I'm not confusing it with anything; a single, simple 24-hour check was all that was needed to have the cloud be treated as standard issue across the client base. A persistent connection was never a requirement for that whatsoever. It's possible that they based the XB1's console design around that: start with a basic built-in setup that would be decent for a year or two and then augment it with virtual resources over time as code gets re-architected to work across the latency-heavy internet.
 
... It's possible that they based the XB1's console design around that: start with a basic built-in setup that would be decent for a year or two and then augment it with virtual resources over time as code gets re-architected to work across the latency-heavy internet.

Precisely zero lines of code will be 're-architected' to deal with latency. Real-time systems hate nanoseconds of latency, let alone the dozens of milliseconds the net brings to the table. The cloud is a hosted server; any other implementation is pure pie-in-the-sky so far. I would love to hear about 'em, but so far, bupkis.
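For scale (the round-trip figures below are illustrative assumptions, not measurements):

```python
# Frame budgets vs typical internet round trips: even an optimistic RTT
# spans multiple whole frames, which is why it can't sit in the render loop.
frame_budget_ms = {"60FPS": 1000 / 60, "30FPS": 1000 / 30}
assumed_rtt_ms = {"good broadband": 30.0, "average": 60.0, "poor": 120.0}

for fps, budget in frame_budget_ms.items():
    for link, rtt in assumed_rtt_ms.items():
        print(f"{fps} ({budget:4.1f}ms budget) vs {link} RTT {rtt:5.1f}ms "
              f"-> ~{rtt / budget:.1f} frames spent waiting on one round trip")
```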

IMO the DF article was making a somewhat obvious point: perf has never scaled linearly with anything except clock speed (for any given architecture). For all the blather about multi-core, raw GHz is still a better investment than more cores, particularly for 3D rendering, and I see no reason why SPUs would differ from CPUs in that regard.

Personal prediction: Thread lock in 12 posts. It seems 'teh bias' crowd has taken over :(
 
Precisely zero lines of code will be 're-architected' to deal with latency. Real-time systems hate nanoseconds of latency, let alone the dozens of milliseconds the net brings to the table.

I don't want to take this thread over with cloud talk, so I'll just say that I wasn't thinking of the realtime render pipeline; I was thinking of other code that isn't millisecond-dependent and can be shifted virtually. You are right though, nothing will get re-architected now that it's not standard; we'll have to wait for next-next gen to see what could have been had this gen.
 
Straight comparisons to discrete cards would make it seem like it fares closer to a 7790. But the fact that they put ESRAM on the GPU die is apparently yielding desirable benefits to the rendering pipeline that put it well beyond that chipset.

Do you have any evidence to back that up or is it pure assumption?

Didn't MS use 7970s in the alpha and beta kits? That's a good indicator of the performance MS is targeting. The DF article lowballed even that.

No, it's a terrible indicator since it's blatantly 3x bigger than the GPU in the X1. Dave Baumann has already more or less said the reason the dev kits held such GPUs is that these were the first GPUs to market with the GCN architecture and thus the only option for the dev kits within the desired timeframe.
 
Of course PCs enjoy a bit of compute as well, although it's not well used AFAIK.

PCs can have compute potential far in excess of either console, depending on the GPU used. Where the limitation relative to the consoles comes in is that high-latency communication between the CPU and GPU means only fairly latency-tolerant jobs can be performed on the GPU, and if you've got many, many different compute jobs being performed at the same time, then the PS4 at least should be able to schedule them more efficiently (in comparison to current-generation PC GPUs).

Going forward, using some more compute on the PC side might minimize PCIe limitations and as such might add to the compute incentive.

PCI-E limitations are the barrier to using more GPU compute on the PC, so this statement makes no sense. It's possible that PCs may not need to rely too heavily on low-latency GPU compute capability moving forwards, on account of having much more powerful CPUs. Where the PS4 may turn to GPU compute to run certain latency-sensitive tasks, PC developers may choose to run those same tasks on the CPU while leaving the latency-tolerant tasks (physics simulations like TressFX, for example) to run on the GPU.
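Rough arithmetic on why that round trip is the barrier (the sustained bandwidth and per-transfer overhead below are assumptions for illustration, not measurements):

```python
PCIE_GBS = 12.0      # assumed sustained host<->device bandwidth (PCIe 3.0 x16 class)
OVERHEAD_MS = 0.05   # assumed fixed cost per transfer/kick-off

def round_trip_ms(megabytes):
    """Upload the data, then download results of the same size, plus fixed overheads."""
    transfer_ms = 2 * (megabytes / 1024.0) / PCIE_GBS * 1000.0
    return transfer_ms + 2 * OVERHEAD_MS

for mb in (1, 16, 64, 256):
    print(f"{mb:4d}MB round trip ~ {round_trip_ms(mb):5.2f}ms of a 16.7ms frame budget")
```

A shared-memory console skips the transfer entirely, which is why the same job can be latency-sensitive on the PC yet fine on the PS4.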
 
I don't want to take this thread over with cloud talk, so I'll just say that I wasn't thinking of the realtime render pipeline; I was thinking of other code that isn't millisecond-dependent and can be shifted virtually. You are right though, nothing will get re-architected now that it's not standard; we'll have to wait for next-next gen to see what could have been had this gen.

I see no reason why they can't put "online required" on games or start putting a cloud symbol on the disc boxes, similar to "1-2 player", "online", etc.

It won't be as fast of an adoption as it would have been but they can still get it done this gen.
 
PCs can have compute potential far in excess of either console, depending on the GPU used. Where the limitation relative to the consoles comes in is that high-latency communication between the CPU and GPU means only fairly latency-tolerant jobs can be performed on the GPU, and if you've got many, many different compute jobs being performed at the same time, then the PS4 at least should be able to schedule them more efficiently (in comparison to current-generation PC GPUs).

Yes, a few ACEs in the hole, so to speak (well, hardware-arbitrated command queues, more accurately, but I couldn't leave that pun alone :oops:).

PCI-E limitations are the barrier to using more GPU compute on the PC, so this statement makes no sense. It's possible that PCs may not need to rely too heavily on low-latency GPU compute capability moving forwards, on account of having much more powerful CPUs. Where the PS4 may turn to GPU compute to run certain latency-sensitive tasks, PC developers may choose to run those same tasks on the CPU while leaving the latency-tolerant tasks (physics simulations like TressFX, for example) to run on the GPU.

Sure, I was sloppy there; I was thinking more in terms of offloading "compute" data to local memory on the card, since more memory may be available as time goes on. More data on the card means less PCI-E traffic, but besides being sloppy, I was tapping on a smartphone and loath to get into the weeds at that point. Of course there is latency even with local memory, and that could be an issue, which would knock that idea down the list of things you would do with "compute". In any case, PC compute solutions will impact console compute and vice versa. One reason CUs aren't exactly the new SPUs.

Your point about latency tolerance reminds me of the Power of the Cloud :smile: Maybe PCs will be hanging compute modules off of Thunderbolt connectors someday :LOL:
 
We know exactly how much compute potential is in the Xbox One: 1.2TFlops. The fault is assuming Xbox One is the "balanced" design which would make the PS4 "overbuilt" somehow. For all any of us know the PS4 is balanced and the Xbox One is underbuilt.

Frankly, the original Cerny quote is probably actually referring to the fact that the vast majority of the performance potential in the PS4 is locked up in the GPU. Coming from a PS3 where you had 200GFlops in the CPU and 250GFlops in the GPU, the PS4 would, comparatively, be more lopsided. But that's just as true of the Xbox One coming from that perspective.

Said in response to a question about VGleaks' "14+4" reveal

Mark Cerny said:
That comes from a leak and is not any form of formal evangelisation. The point is the hardware is intentionally not 100 per cent round. It has a little bit more ALU in it than it would if you were thinking strictly about graphics. As a result of that you have an opportunity, you could say an incentivisation, to use that ALU for GPGPU.

I don't think your interpretation makes much sense given this pretty clear statement on the subject.

Having said that, I agree that this also doesn't make the PS4 architecture "unbalanced" as that will ultimately be dependent on the workloads it is going to be tasked with in next generation games. Whether those workloads will trend towards a heavy enough use of GPU compute for the PS4 architecture to achieve high utilization of its ALU resources is an open question, though.
 
I see no reason why they can't put "online required" on games or start putting a cloud symbol on the disc boxes, similar to "1-2 player", "online", etc.

It won't be as fast of an adoption as it would have been but they can still get it done this gen.

Well my issue is getting the developers to actually put research and coding time towards a feature set that is now optional.

For example, what I've always thought would be really cool in a game is having an AI help you out. So you arrive at a situation in a game and you can ask your AI anything, like "Computer, what are my odds of survival?", "Computer, should I attack the tank or flank the soldiers?", or "Computer, how do I make it up to that ledge?", etc., and it replies in unique and interesting ways each time. With Kinect and the cloud guaranteed, they could use the mic on Kinect to catch your voice and send the recording along with the current game state to the cloud servers; the cloud servers parse your voice and the situation, and send back a reply for the AI to speak to you. It would effectively be a mix of Apple's Siri and Halo's Cortana all wrapped in one. It's one of those neat things that makes the little fixed console appear to do more than it's really capable of, since rather than sacrifice a ton of memory and CPU power locally to handle all this, you just reserve the local computational power for all the realtime stuff and let the cloud handle this task, which can tolerate many seconds of latency and still be totally cool. And being cloud-based means it could be a learning AI, learning more and more from all the collective things gamers ask it and improving its responses over time.

That is something that could be purely optional in a game, so if your internet connection was off or bad at the time, then your AI battle computer could just respond with canned responses like "computer is offline" or "bad data connection, syntax error", or some other clever line. However, if your internet connection was working, then you'd have a cool AI partner with you in any game. But would anyone bother dedicating time and money to developing something like this with the cloud being optional? Or would they just pass this gen and revisit it next gen, 5 to 8 years from now, when internet connectivity is guaranteed? These are the types of things that I think would have breathed new life into games: things that are both very heavy on CPU and memory use yet very tolerant of latency, even if it's many seconds long.
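A minimal sketch of that fallback pattern, assuming a hypothetical HTTP endpoint and payload format (nothing here is a real Xbox or PSN API):

```python
import json
import random
import urllib.request

CANNED_LINES = ["Computer is offline.", "Bad data connection, syntax error."]

def ask_battle_computer(question, game_state, timeout_s=3.0,
                        url="https://example.com/assistant"):  # hypothetical endpoint
    """Try the cloud assistant; degrade to canned lines if the net is out or slow."""
    payload = json.dumps({"question": question, "state": game_state}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout_s) as resp:
            return json.loads(resp.read())["reply"]
    except Exception:
        # Offline, timed out, or a bad reply: the game keeps going either way.
        return random.choice(CANNED_LINES)

print(ask_battle_computer("What are my odds of survival?",
                          {"health": 35, "enemies_nearby": 4}))
```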

Hmm, maybe this post belongs in the server augmentation thread; mods, I guess move it there if that makes sense. I don't want to pollute this thread if it's the wrong place for it.
 