...I don't think nearly enough credit is given to the experience of failure...
You should consider writing a book about it.
There would be a lot. Every one a valuable lesson!
There are two problems with this. The first is that compute shader GPU programming is very fragile with regard to performance. You need to know exactly how the GPU operates in order to write fast code for it. Slow GPU code can easily be 10x+ slower than optimized code (stealing most of the GPU resources from the rendering). Gameplay programmers in general do not know low-level GPU details well enough, and do not have experience with the GPU programming languages used (HLSL). Gameplay programming teams tend to have more junior programmers than rendering teams, and gameplay programmers in general are less interested in high-performance multithreaded programming than rendering programmers.
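To illustrate how fragile compute performance can be, here is a minimal micro-benchmark sketch of my own (not from the post above, and using CUDA as a stand-in for an HLSL compute shader): both kernels move exactly the same amount of data, but the second one defeats memory coalescing, and on most GPUs it ends up several times slower.

```cuda
// Hypothetical micro-benchmark (illustration only). Build with: nvcc -O2 coalesce.cu
#include <cstdio>
#include <cuda_runtime.h>

// Neighbouring threads touch neighbouring floats, so loads/stores coalesce
// into wide memory transactions.
__global__ void copyCoalesced(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Same amount of work, but neighbouring threads touch addresses that are
// n/stride floats apart, so every access becomes its own memory transaction.
__global__ void copyStrided(const float* in, float* out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int j = (i % stride) * (n / stride) + (i / stride);  // transposed index
        out[j] = in[j];
    }
}

int main()
{
    const int n = 1 << 24;                      // 16M floats, 64 MB per buffer
    const int stride = 32;
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    dim3 block(256), grid((n + 255) / 256);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms;

    copyCoalesced<<<grid, block>>>(in, out, n); // warm-up
    cudaEventRecord(start);
    copyCoalesced<<<grid, block>>>(in, out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("coalesced: %.3f ms\n", ms);

    copyStrided<<<grid, block>>>(in, out, n, stride); // warm-up
    cudaEventRecord(start);
    copyStrided<<<grid, block>>>(in, out, n, stride);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("strided:   %.3f ms\n", ms);

    cudaFree(in); cudaFree(out);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return 0;
}
```

The exact gap depends on the GPU, but the point stands: the same logical work can differ by many times in cost purely based on how it maps onto the hardware.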
The second problem is of course latency, and this makes things even harder. The GPU runs asynchronously from the CPU, and you should be prepared for at least half a frame of latency (on consoles; 2+ frames on PC) before the compute results come back. Writing (bug-free) asynchronous code is significantly harder than writing synchronous code. Gameplay programmers prefer fast iteration and prototyping. Writing asynchronous GPGPU code and debugging and optimizing it until it works properly completely ruins your iteration time for gameplay prototyping.
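And here is a minimal sketch of what that latency means in practice (again my own illustration, using CUDA streams and events as a stand-in for a console/PC compute queue; fakeGameplayQuery and the two-frame figure are assumptions): the CPU never blocks on the current frame's dispatch and only consumes the result it kicked off FRAMES_IN_FLIGHT frames earlier, so any gameplay logic built on top of it has to cope with stale data.

```cuda
// Hypothetical "GPU results arrive two frames later" loop (illustration only).
#include <cstdio>
#include <cuda_runtime.h>

const int FRAMES_IN_FLIGHT = 2;      // assumed GPU latency in frames
const int N = 1 << 20;

// Stand-in for some gameplay query, e.g. "how many samples pass a test".
__global__ void fakeGameplayQuery(const float* data, int n, int* result)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && data[i] > 0.5f)
        atomicAdd(result, 1);
}

int main()
{
    float* d_data;
    int*   d_result;
    cudaMalloc(&d_data,   N * sizeof(float));
    cudaMalloc(&d_result, FRAMES_IN_FLIGHT * sizeof(int));
    cudaMemset(d_data, 0, N * sizeof(float));   // a real game would update this

    int* h_result;                              // pinned, so the copy is truly async
    cudaMallocHost(&h_result, FRAMES_IN_FLIGHT * sizeof(int));

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaEvent_t done[FRAMES_IN_FLIGHT];
    for (int i = 0; i < FRAMES_IN_FLIGHT; ++i)
        cudaEventCreate(&done[i]);

    for (int frame = 0; frame < 10; ++frame) {
        int slot = frame % FRAMES_IN_FLIGHT;

        // 1. Consume the result of the dispatch issued FRAMES_IN_FLIGHT frames
        //    ago; gameplay code has to be written against this stale data.
        if (frame >= FRAMES_IN_FLIGHT) {
            cudaEventSynchronize(done[slot]);   // normally already signalled
            printf("frame %d uses GPU result from frame %d: %d\n",
                   frame, frame - FRAMES_IN_FLIGHT, h_result[slot]);
        }

        // 2. Kick off this frame's compute work and readback without blocking.
        cudaMemsetAsync(&d_result[slot], 0, sizeof(int), stream);
        fakeGameplayQuery<<<(N + 255) / 256, 256, 0, stream>>>(d_data, N, &d_result[slot]);
        cudaMemcpyAsync(&h_result[slot], &d_result[slot], sizeof(int),
                        cudaMemcpyDeviceToHost, stream);
        cudaEventRecord(done[slot], stream);

        // 3. ...the rest of the CPU frame (gameplay, render submission) runs here.
    }

    cudaDeviceSynchronize();
    cudaFreeHost(h_result);
    cudaFree(d_data); cudaFree(d_result);
    cudaStreamDestroy(stream);
    for (int i = 0; i < FRAMES_IN_FLIGHT; ++i) cudaEventDestroy(done[i]);
    return 0;
}
```

Even this toy version needs per-slot buffers, events and careful ordering; doing the same for real gameplay systems is where the iteration-time cost comes from.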
I thought the CUs are supposed to be suitable for such tasks?

In the Bullet physics PS4 implementation, non-gameplay physics can be done on the GPU or the CPU, but gameplay physics only on the CPU. And they said the reason is synchronisation.
Oh, and another point... While we do not have any statistics about how frame rate affects player performance in games like The Last of Us or Halo 5, practically all other FPS games show that players with higher frame rates and lower input latency do have an advantage. After all, everyone played competitive Quake 1 to 3 at the lowest possible detail settings (some of which also gave other advantages, like better visibility of enemies). So I believe we can safely extrapolate that players at 30fps in both co-op and competitive multiplayer would be at a disadvantage, particularly when the game was designed for higher refresh rates.
I'm sorry to see that some of my posts here have come across as offensive
What are the chances of X1 being 720p again?

http://www.eurogamer.net/articles/d...ars-battlefront-ps4-beta-performance-analysis
"Overall, first impressions suggest a solid turnout for the PS4 beta build. Outside of issues with matchmaking when using the partner system (as noted during a live-stream, where Eurogamer colleague Ian Higton faced an unending loading screen), the state of its visuals and frame-rate are promising. How it compares to the unseen Xbox One version will be interesting as well - something we intend to pursue once the beta launches publicly."
Fairly high.
To begin with, we see the expected improvements right up front. That means a full 1080p resolution coupled with an excellent post-process anti-aliasing solution that manages to dodge in-surface aliasing while minimising flicker and blur. In addition, anisotropic filtering is utilised across the game with a variable level of quality. We see some surfaces operating with what looks similar to 16x AF while other, less important details seem to go as low as 4x. Even at its lowest level, it's still a substantial improvement over the trilinear filtering used on PlayStation 3. Image quality is simply excellent all around here.
100%?

72.0%?
You mean 72.0 percent?
We all have to wait for the final product to know 100%. The Xbox resolution will be announced on DF soon, I imagine. But it's still beta.