Digital Foundry Article Technical Discussion Archive [2015]

If they chose parity for the CPU, why not parity for the GPU then?
They have. The visuals are the same. They just decided to set a slightly higher bar than the XB1 could match and scaled it back a bit.

What advantages would I expect? Well, lots of NPCs (AI) in an open-world game, lots of physics... shouldn't this put stress on the CPU?
Why would XB1 show an advantage though? If they chose a number of NPCs and an amount of physics that fits the PS4's CPU, that would fit the XB1 too. There's no particular requirement to add more on XB1. If they had picked a target that fitted the XB1 exactly, the PS4's framerate would suffer, so they chose instead to hit the happy middle ground.

Can't sudden physics interaction spike CPU usage?
Again, if it's balanced for the PS4, it won't show as an advantage on XB1. Plus you don't know how much is CPU and how much is GPU. But even there, it's balanced so that the two games are nigh identical. There isn't a particle or visual-effects advantage on PS4; simply a resolution advantage.

The game is crafted as a uniform multiplatform experience. As such it won't show any strengths or weaknesses of either platform. It'll sell on its merit as a game, not a hardware showcase.
 
The game is crafted as a uniform multiplatform experience. As such it won't show any strengths or weaknesses of either platform. It'll sell on its merit as a game, not a hardware showcase.
Frankly, this is how it should be done for multiplatform titles. Batman is a glorious experience on PS4 and Xbox; nobody misses anything. Same thing for Far Cry 4, IMO.
 
No, GPU bound means that the bottleneck is in at least one part of the GPU.

No, it means "bad code bound".

You will never utilize a GPU 100%.

Obviously you can, and more easily than a CPU. You just need to remember that switching threads on a GPU is free.

You can't even use the ROPs 100%.

I was talking about the ALUs; it's obvious that you cannot utilize fixed-function blocks 100% in some cases, they are too "dumb".
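
To make the "thread switching is free" point concrete, here is a minimal CUDA sketch (CUDA is used purely for illustration; the consoles use their own APIs). The idea is that a GPU hides ALU and memory latency by keeping far more threads resident than it has execution units, so a stalled warp simply gets parked while another runs:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// ALU-heavy kernel: lots of math per thread. Each warp's FMA chain has
// pipeline latency, but the scheduler hides it by running other resident
// warps in the stall slots; that switch costs nothing.
__global__ void aluHeavy(float* out, int iters)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = i * 0.001f;
    for (int k = 0; k < iters; ++k)
        v = v * 1.0001f + 0.5f;   // dependent FMAs
    out[i] = v;
}

int main()
{
    const int n = 1 << 20;   // ~1M threads, far more than the GPU has ALUs
    float* out = nullptr;
    cudaMalloc(&out, n * sizeof(float));

    // Oversubscription is the whole trick: with enough warps resident,
    // ALU utilization can approach 100% even though individual threads stall.
    aluHeavy<<<n / 256, 256>>>(out, 1024);
    cudaDeviceSynchronize();

    cudaFree(out);
    printf("done\n");
    return 0;
}
```

Whether the ALUs actually reach near-100% utilization then depends on having enough independent work in flight each cycle, which is exactly why fixed-function blocks like the ROPs are harder to saturate.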
 
They have. The visuals are the same. They just decided to set a slightly higher bar than the XB1 could match and scaled it back a bit.

Why would XB1 show an advantage though? If they chose a number of NPCs and an amount of physics that fits the PS4's CPU, that would fit the XB1 too. There's no particular requirement to add more on XB1. If they had picked a target that fitted the XB1 exactly, the PS4's framerate would suffer, so they chose instead to hit the happy middle ground.

Again, if it's balanced for the PS4, it won't show as an advantage on XB1. Plus you don't know how much is CPU and how much is GPU. But even there, it's balanced so that the two games are nigh identical. There isn't a particle or visual-effects advantage on PS4; simply a resolution advantage.

The game is crafted as a uniform multiplatform experience. As such it won't show any strengths or weaknesses of either platform. It'll sell on its merit as a game, not a hardware showcase.

I have trouble following your argument. It would make sense to me if the game were locked at 30 Hz with no dips throughout the whole game, ever. But the game sometimes goes over its frame budget, as can be seen in the DF videos. Thus the game is more taxing in some scenes than in others, and thus the game is sometimes hardware bound.

Your argument translates to the hypothesis that the game is purely GPU bound in all cases throughout the whole game and never, ever CPU bound. Otherwise, in situations where the PS4 is CPU bound, the X1 should have an advantage... right?

Why is this game never CPU bound? Is the CPU such an unimportant part of a console? But why did MS choose to increase the CPU clock if it is not important for games? As I said, I can't follow your argument.

PS: Are you suggesting parity because Sony has the exclusive deal with RS?
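
For what it's worth, "CPU bound vs. GPU bound" is something you can measure rather than argue about. Here is a minimal sketch in CUDA (the function names and workloads are hypothetical stand-ins, not anything from the game in question): time the CPU-side simulation and the GPU-side rendering separately, and whichever is longer is the bottleneck for that scene:

```cuda
#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-ins for real per-frame game work.
void simulateGameOnCpu()                      // AI, physics, scripts
{
    volatile double acc = 0.0;
    for (int i = 0; i < 2000000; ++i) acc += i * 0.5;
}

__global__ void renderFrameOnGpu(float* fb)   // shading, post FX
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    fb[i] = sinf(i * 0.001f);
}

int main()
{
    float* fb = nullptr;
    cudaMalloc(&fb, 1024 * 256 * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time each side of the frame separately. Whichever takes longer
    // is the bottleneck for this scene: cpuMs > gpuMs means CPU bound.
    auto c0 = std::chrono::steady_clock::now();
    simulateGameOnCpu();
    auto c1 = std::chrono::steady_clock::now();
    float cpuMs = std::chrono::duration<float, std::milli>(c1 - c0).count();

    cudaEventRecord(start);
    renderFrameOnGpu<<<1024, 256>>>(fb);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float gpuMs = 0.f;
    cudaEventElapsedTime(&gpuMs, start, stop);

    printf("CPU %.2f ms, GPU %.2f ms -> %s bound\n",
           cpuMs, gpuMs, cpuMs > gpuMs ? "CPU" : "GPU");

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(fb);
    return 0;
}
```

In a real engine the two overlap in flight, so the same scene can flip between CPU bound and GPU bound from frame to frame, which is why the "never CPU bound" hypothesis is hard to sustain for a whole game.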
 
An open-world game with a relatively large amount of physics: where is the One's CPU advantage?

I'm going off memory, and my memory is sort of funky when it comes to these things... but I'm pretty sure Warner Bros. (Rocksteady) is using the Havok physics engine for the console editions of Batman: Arkham, while the farmed-out (Iron Galaxy Studios) PC edition uses Nvidia's PhysX engine exclusively. I know that Warner Bros. (NetherRealm Studios) uses the Havok physics engine in conjunction with the Unreal Engine 3 that Mortal Kombat X is built on. And if I remember correctly, the Havok engine is GPU-capable under OpenCL or OpenGL and possibly DirectX environments. So the console editions may be running most, if not all, of the physics on the GPU.

My guess anyhow...
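
For illustration, this kind of work does map to the GPU very naturally: physics integration is one independent calculation per body. A minimal CUDA sketch of the idea follows; it is explicitly not Havok's implementation (Havok's GPU path, as the post says, would go through OpenCL or similar), just the shape of the problem:

```cuda
#include <cuda_runtime.h>

// One thread per body: forward-Euler integration under gravity.
struct Body { float3 pos, vel; };

__global__ void integrate(Body* bodies, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Body b = bodies[i];
    b.vel.y += -9.81f * dt;       // gravity
    b.pos.x += b.vel.x * dt;
    b.pos.y += b.vel.y * dt;
    b.pos.z += b.vel.z * dt;
    bodies[i] = b;                // tens of thousands advance in parallel
}

int main()
{
    const int n = 65536;
    Body* d = nullptr;
    cudaMalloc(&d, n * sizeof(Body));
    cudaMemset(d, 0, n * sizeof(Body));                      // toy initial state
    integrate<<<(n + 255) / 256, 256>>>(d, n, 1.f / 60.f);   // one 60 Hz step
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```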
 
Because the initial XBO API was very inefficient and CPU-taxing. Maybe the XBO APIs, VM and OS are still more taxing than the PS4's OS and API.

And the two have completely different compilers (Clang vs. GCC) that can show performance differences depending on the benchmark, i.e. depending on the game.
 
Same here: are CPUs important for games or not? Why did MS up the CPU clock if CPU power has no relevance for games in the end? I am confused now!

MS upped *both* the CPU and GPU speeds, from 800/1600 MHz to 853/1750 MHz, over the initial spec. So it probably had nothing to do with the relative quality of the XBone's OS or compiler (sorry to spoil the fun) and more to do with what boosts were possible without affecting yields too negatively.
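
For reference, the uplifts over the initial spec work out as follows:

```latex
% Xbox One clock increases over the initial specification
\frac{853}{800} \approx 1.066 \;(+6.6\%\ \text{GPU}), \qquad
\frac{1750}{1600} \approx 1.094 \;(+9.4\%\ \text{CPU})
```

Note that the CPU got the proportionally larger bump, which is at least consistent with MS seeing the CPU as the bigger concern.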
 
Any moving of work from CPU -> GPU will be done on a cost-benefit basis.

Games are expensive to make, and many lose money. The additional cost and risk (i.e. potential cost) of completely rewriting code that doesn't need it means it doesn't happen. It's not about skill or desire; it's about not wanting your studio to go under.

I'm sure it can be done, but I pity the person that has to rewrite a complex scripting engine for large scale simulation, based on 10+ years of continual development, to work on a GPU. In a tight timescale. Without impacting on development. And then debug it. And document it. And write the tools to develop content for it. And then document those. And debug them. Without affecting development.
 
Any moving of work from CPU -> GPU will be done on a cost-benefit basis.

Games are expensive to make, and many lose money. The additional cost and risk (i.e. potential cost) of completely rewriting code that doesn't need it means it doesn't happen. It's not about skill or desire; it's about not wanting your studio to go under.

I'm sure it can be done, but I pity the person that has to rewrite a complex scripting engine for large scale simulation, based on 10+ years of continual development, to work on a GPU. In a tight timescale. Without impacting on development. And then debug it. And document it. And write the tools to develop content for it. And then document those. And debug them. Without affecting development.

Many studios work with Havok; it was Havok that needed to rewrite its physics engine.


Naughty Dog had a custom physics engine, but it was difficult to maintain, and it could have become a problem if the creator of the physics engine had ever quit the studio. Now they work with Havok, like Guerrilla Games and many other first- and third-party studios.

Ubisoft created some cloth-physics technology via compute... and they use it in all their titles after AC Unity.

And they said it was not an easy task to use compute, but it works across all platforms: Xbox One, PC and PS4...
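
As a rough illustration of what "cloth via compute" means, here is a generic Verlet-integration sketch in CUDA. It is not Ubisoft's actual technology (theirs ships as compute shaders on all three platforms); it just shows why cloth parallelizes so well, one thread per particle:

```cuda
#include <cuda_runtime.h>

// Verlet integration step for cloth particles, one thread each.
// The distance constraints between neighbouring particles would be
// relaxed in a second kernel.
__global__ void verletCloth(float3* pos, float3* prev, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float gy = -9.81f * dt * dt;   // gravity term
    float3 p = pos[i];
    float3 q = prev[i];

    // new = 2p - q + a*dt^2; velocity is implicit in (p - q)
    float3 np = make_float3(2.f * p.x - q.x,
                            2.f * p.y - q.y + gy,
                            2.f * p.z - q.z);
    prev[i] = p;
    pos[i]  = np;
}

int main()
{
    const int n = 128 * 128;             // a 128x128 cloth grid
    float3 *pos, *prev;
    cudaMalloc(&pos,  n * sizeof(float3));
    cudaMalloc(&prev, n * sizeof(float3));
    cudaMemset(pos,  0, n * sizeof(float3));   // toy initial state
    cudaMemset(prev, 0, n * sizeof(float3));
    verletCloth<<<(n + 255) / 256, 256>>>(pos, prev, n, 1.f / 60.f);
    cudaDeviceSynchronize();
    cudaFree(pos); cudaFree(prev);
    return 0;
}
```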
 
Guerrilla wrote two things about compute in the KZ Shadow Fall post-mortem. First, they will move the raycasting for visibility from the CPU to the GPU: it is the most expensive thread on the CPU side, and it is much more efficient to do on the GPU. On the graphics side, the DOF is quarter resolution in KZ:SF, but for their next title (Horizon) they write about using a full-resolution compute-based DOF.
...
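
Visibility raycasts are a textbook fit for compute: thousands of independent queries, one thread each. Below is a hedged sketch of the general technique in CUDA, not Guerrilla's code (a real engine traces against an actual scene representation rather than the toy sphere occluders used here):

```cuda
#include <cuda_runtime.h>
#include <math.h>

// One thread per visibility query, brute-force against sphere occluders.
struct Sphere { float3 c; float r; };

// Ray-sphere test; assumes the ray direction d is normalized.
__device__ bool hitsSphere(float3 o, float3 d, Sphere s, float maxT)
{
    float3 oc = make_float3(o.x - s.c.x, o.y - s.c.y, o.z - s.c.z);
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.r * s.r;
    float disc = b * b - c;
    if (disc < 0.f) return false;
    float t = -b - sqrtf(disc);
    return t > 0.f && t < maxT;
}

__global__ void visibility(const float3* org, const float3* dir,
                           const float* maxT, const Sphere* occ,
                           int nRays, int nOcc, int* visible)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nRays) return;

    int vis = 1;
    for (int k = 0; k < nOcc; ++k)
        if (hitsSphere(org[i], dir[i], occ[k], maxT[i])) { vis = 0; break; }
    visible[i] = vis;   // e.g. "can this AI agent see the player?"
}

int main()
{
    const int nRays = 4096, nOcc = 128;
    float3 *org, *dir; float* maxT; Sphere* occ; int* vis;
    cudaMalloc(&org,  nRays * sizeof(float3));
    cudaMalloc(&dir,  nRays * sizeof(float3));
    cudaMalloc(&maxT, nRays * sizeof(float));
    cudaMalloc(&occ,  nOcc  * sizeof(Sphere));
    cudaMalloc(&vis,  nRays * sizeof(int));
    // Toy zeroed inputs; real code would upload actual rays and occluders.
    cudaMemset(org, 0, nRays * sizeof(float3));
    cudaMemset(dir, 0, nRays * sizeof(float3));
    cudaMemset(maxT, 0, nRays * sizeof(float));
    cudaMemset(occ, 0, nOcc * sizeof(Sphere));
    visibility<<<(nRays + 255) / 256, 256>>>(org, dir, maxT, occ,
                                             nRays, nOcc, vis);
    cudaDeviceSynchronize();
    return 0;   // cleanup omitted for brevity
}
```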
 
Media Molecule went 100% compute-based software rendering, and some PC devs are excited by the possibilities compute opens up on the rendering side. More and more things will be done via compute, both graphical and non-graphical tasks...

https://mobile.twitter.com/JJcoolkl/status/611708755034685440

Really like the vision AMD Fiji shows for the future of games: Tons of compute + memory bandwidth, let innovation flourish on the SW side

Honestly, not everyone will even be using triangles anymore. It is time for GPUs to become more general-purpose devices.


Matthäus G. Chajdas @NIV_Anteru
@JJcoolkl ray-trace all the distance fields
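
"Ray-trace all the distance fields" refers to sphere tracing: march a ray through a signed distance field, stepping each time by the field value, which is always a safe distance. A minimal, self-contained CUDA sketch of the technique (the analytic sphere SDF is just a placeholder for a real scene field):

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Signed distance to a unit sphere at the origin.
__device__ float sceneSdf(float3 p)
{
    return sqrtf(p.x * p.x + p.y * p.y + p.z * p.z) - 1.f;
}

// One thread per pixel, marching the distance field.
__global__ void sphereTrace(float* depth, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Simple pinhole camera looking down -z from z = 3.
    float3 o = make_float3(0.f, 0.f, 3.f);
    float3 d = make_float3((x + 0.5f) / w - 0.5f,
                           (y + 0.5f) / h - 0.5f, -1.f);
    float len = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
    d.x /= len; d.y /= len; d.z /= len;

    float t = 0.f;
    for (int i = 0; i < 64; ++i) {              // march the field
        float3 p = make_float3(o.x + t * d.x, o.y + t * d.y, o.z + t * d.z);
        float dist = sceneSdf(p);
        if (dist < 1e-3f) break;                // hit
        t += dist;                              // safe step = distance
        if (t > 20.f) { t = -1.f; break; }      // miss
    }
    depth[y * w + x] = t;
}

int main()
{
    const int w = 640, h = 360;
    float* depth = nullptr;
    cudaMalloc(&depth, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    sphereTrace<<<grid, block>>>(depth, w, h);
    cudaDeviceSynchronize();
    cudaFree(depth);
    return 0;
}
```

No triangles are involved at any point, which is exactly the tweet's point about GPUs becoming more general-purpose devices.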
 
Games are expensive to make, and many lose money. The additional cost and risk (i.e. potential cost) of completely rewriting code that doesn't need it means it doesn't happen. It's not about skill or desire; it's about not wanting your studio to go under.

It was OK to rewrite in previous console generations (new architecture and stuff). What happened now? (Please, do not try to sell me that it's expensive because "everybody says so"; prove why it should be expensive.)
 
It was OK to rewrite in previous console generations (new architecture and stuff). What happened now? (Please, do not try to sell me that it's expensive because "everybody says so"; prove why it should be expensive.)

Actually, not everything was rewritten at the start of every previous generation, and certainly nothing as bold as moving everything from a single-threaded CPU to a massively multithreaded GPU.

I don't need to prove that it's expensive and subject to a cost/benefit consideration. You know it is; at least you do if you have any contacts at any developer of remotely significant size.
 
MS upped *both* the CPU and GPU speeds, from 800/1600 MHz to 853/1750 MHz, over the initial spec. So it probably had nothing to do with the relative quality of the XBone's OS or compiler (sorry to spoil the fun) and more to do with what boosts were possible without affecting yields too negatively.


And what about this quote from an MS technical engineer:

Interestingly, the biggest source of your frame-rate drops actually comes from the CPU, not the GPU

Read this for more insight into the X1's technical choices, and for the quote:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects
 
certainly nothing as bold as moving everything from a single-threaded CPU to a massively multithreaded GPU

How is it more bold than moving from PS1 to PS2, for example?

I don't need to prove that it's expensive

You do, because obviously current dev budgets are inflated (in big companies).

A different economy, and people tend to learn from past mistakes?

Which mistakes? Making great games in the PS2 era? :)

P.S. So as not to derail this thread any further: http://www.slideshare.net/reed2001/culture-1798664
That's how successful companies work: a small number of very skilled people who strive for excellence. And 99% of current AAA creators (maybe even all of them; I think only ND had some of that culture) are built by MBAs on a "make games using a lot of unskilled workforce" model (and the same goes for the software industry as a whole).
 
It was OK to rewrite in previous console generations (new architecture and stuff). What happened now? (Please, do not try to sell me that it's expensive because "everybody says so"; prove why it should be expensive.)
I don't know much, but it seems safe to assert that the code base is way bigger to begin with, with a lot more systems working with one another. The pace of iteration, yearly for some franchises, also calls for trade-offs: something easier to maintain, that can survive staff turnover, etc.
Some engines or parts are used by multiple teams, so you need documentation, maintenance again, etc.
Overall it is about economics: how you manage projects and teams whose sizes have grown exponentially. Engines and know-how are multi-million-dollar assets, and companies treat them as such.
 