Digital Foundry Article Technical Discussion Archive [2015]

Again, if you are selectively choosing the worst examples you aren't contributing to the discussion. Lair is that example. You mentioned it twice. But even if I accept your Lair example, you know perfectly well that comparing past gen's Lair with this gen's New 'n' Tasty is a silly comparison. The former was under huge development pressure and was trying to do a lot more for its time than New 'n' Tasty on the PS4, such as huge environments with massive battles. It faced more challenges and difficulties. Back then we also had a lot more superb examples to compare against, so we know that those worse examples you selectively pick were the exception to the rule. Regardless, if you want to discuss this with me you should drop the attitude.

Here is my original post


Discuss with me in the original context. Give attention to these two points: 1) I did mention the possibility of difficult optimization; 2) remasters don't push the envelope.
It was food for thought; I was expressing my pondering. If you want to share what you think the issue may be, there are better ways to do it than with this attitude.
If asking focused questions is your definition of "attitude", I guess you would be correct. You seem to want to stick with Lair after I mentioned "all the Madden games" of that time (first 3 years?). I said "etc", but the truth is that it was a LOT of the games within the first 3 years of the PS3's lifespan. Gloss over that if you wish. It does not keep these things from being true.

PS4 remasters from the PS3 should push the CPU. When taking full advantage of the Cell processor (especially in a 30fps game), it should be quite difficult to complete the same work in half the time (at 60fps the frame budget drops from ~33.3ms to ~16.7ms). I believe this is quite common knowledge. To pull that off is an amazing feat. That's why even Naughty Dog had some trouble with it.

It seems you are trying to lean heavily on the hardware not being up to snuff. At less than two years into this gen, that seems very much like a flawed argument. Feel free to address any number of last-gen games (around this time index) that had performance issues (there were MANY on PS3). Tell me, what makes them so different?

You say you were just pondering, but it seemed/seems like a condemnation of the hardware.
 
PS4 remasters from the PS3 should push the CPU. When taking full advantage of the Cell processor (especially in a 30fps game), it should be quite difficult to complete the same work in half the time (at 60fps the frame budget drops from ~33.3ms to ~16.7ms). I believe this is quite common knowledge. To pull that off is an amazing feat. That's why even Naughty Dog had some trouble with it.

The jump from the PS2 CPU to the PS3 CPU is massive, but it's not the same for PS3 -> PS4. Sure, the GPU jump is big, and Sony might have bet on GPU compute. But it doesn't change the fact that they went with a comparatively weaker CPU.

It's obvious the PS3 pushed the envelope with the best available hardware at the time (Blu-ray, memory, Cell, minus the RSX), but the PS4 didn't, hence they are losing little or no money on the hardware compared to the PS3.
 
The jump from the PS2 CPU to the PS3 CPU is massive, but it's not the same for PS3 -> PS4. Sure, the GPU jump is big, and Sony might have bet on GPU compute. But it doesn't change the fact that they went with a comparatively weaker CPU.

It's obvious the PS3 pushed the envelope with the best available hardware at the time (Blu-ray, memory, Cell, minus the RSX), but the PS4 didn't, hence they are losing little or no money on the hardware compared to the PS3.
It is just how you see the CPU of the PS3. The general compute part of the Cell was... well, decent. The Cell was mainly used to support the GPU. When you look at work that should be done by the CPU, that part has been greatly improved with the Jaguar CPU; it just depends on how you look at it. E.g. branching is something the Cell's SPEs are not good at, so you had to write your code with that in mind. The SPEs are highly specialized cores. If you have optimal code for them, they are great. If not... well, a more general-purpose CPU like the Jaguar is just better at that.

The really good thing for developers is that x86 is really well known. Cell is just too specialized a thing.
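
To make the branching point concrete, here is a minimal sketch in plain C++ (standing in for SPE-style code; these are not actual SPE intrinsics) of the kind of restructuring that hardware pushed you toward: compute both outcomes and select, instead of jumping.

```cpp
#include <cstddef>

// Branch-heavy version: fine on a general-purpose core with a branch
// predictor (PPU or Jaguar), but mispredicted branches stall an in-order
// SIMD core like an SPE.
void clamp_branchy(float* v, std::size_t n, float lo, float hi) {
    for (std::size_t i = 0; i < n; ++i) {
        if (v[i] < lo)      v[i] = lo;
        else if (v[i] > hi) v[i] = hi;
    }
}

// Branchless "select" style favoured on the SPEs: both outcomes are
// evaluated and the result is chosen with a compare + select, so the
// pipeline never has to guess which way a jump goes.
void clamp_branchless(float* v, std::size_t n, float lo, float hi) {
    for (std::size_t i = 0; i < n; ++i) {
        float x = v[i];
        x = (x < lo) ? lo : x;  // typically compiles to min/max or a select
        x = (x > hi) ? hi : x;
        v[i] = x;
    }
}
```

On a Jaguar core either version is fine; on an SPE the second style (plus SIMD intrinsics) was often the difference between great and mediocre throughput.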
 
I also think you need to see the system as a whole. The new CPU part of the PS4 is basically intended to just replace the PPU part, and the GPU part is intended to replace the SPEs+GPU. That said, it is not yet clear that CUs are as easy or as efficient to program and integrate into your workflow as the SPE parts were. I assume they will be, by virtue of CUs being present in a lot of desktop GPUs and therefore better known and more widely used than SPEs. But SPEs may be better at some things than CUs are, as they are perhaps closer to CPUs in terms of their independence, ability to run 'normal' C code, etc., and CUs could still be more complicated to integrate with CPU work?

Certainly, though, I'm getting the impression that dedicated PS4 titles already do more with CUs than most PC games do, but I'd love to see some papers on them being used really well. We also have this topic, by the way, where I dropped in an article I recently came across about async compute in The Tomorrow Children:

https://forum.beyond3d.com/threads/asynchronous-compute-what-are-the-benefits.54891/page-15
 
If asking focused questions is your definition of "attitude", I guess you would be correct. You seem to want to stick with Lair after I mentioned "all the Madden games" of that time (first 3 years?). I said "etc", but the truth is that it was a LOT of the games within the first 3 years of the PS3's lifespan. Gloss over that if you wish. It does not keep these things from being true.

PS4 remasters from the PS3 should push the CPU. When taking full advantage of the Cell processor (especially in a 30fps game), it should be quite difficult to complete the same work in half the time (at 60fps the frame budget drops from ~33.3ms to ~16.7ms). I believe this is quite common knowledge. To pull that off is an amazing feat. That's why even Naughty Dog had some trouble with it.

It seems you are trying to lean heavily on the hardware not being up to snuff. At less than two years into this gen, that seems very much like a flawed argument. Feel free to address any number of last-gen games (around this time index) that had performance issues (there were MANY on PS3). Tell me, what makes them so different?

You say you were just pondering, but it seemed/seems like a condemnation of the hardware.
It's not you asking questions that is my definition of "attitude". You know very well what I mean.
You tend to make too many assumptions about what you think I may be trying to convince you of, and you continuously ignore things I said.
Let me quote myself AGAIN (I am specifically pointing to the bolded areas):
Back then we also had a lot more superb examples to compare against, so we know that those worse examples you selectively pick were the exception to the rule.
1) I did mention the possibility of difficult optimization
These two quotes should have made your reply above pointless, because it only holds under the assumption that they were never said.

Regarding the PS3 vs what we have this gen, there is a HUGE FREAKIN' DIFFERENCE. That console was a pain in the ass to program for and squeeze performance out of. The GPU was weak, the Cell, although strong, was like solving a Rubik's cube (as Cerny put it), it had two memory pools, and in the first years a much larger memory footprint was reserved for the OS compared to what was available later. Developers were reporting struggles. We don't get many of these complaints this gen. The PS3 did have a disastrous start and it showed in price, performance and sales. Once more, your PS3 example, much like Lair, is not a good one because it's not a like-for-like comparison with either the XB1 or the PS4. We knew back then what was wrong with the PS3.

If you take the 360, on the other hand, those "etc" examples you didn't name are reduced even further. We thought of the ugly examples as efforts that didn't get the appropriate care, because at the same time we had plenty of visually more stunning games outperforming them. I remember people back then talking about some games (I think it was Madden?), and many agreed that, from a business perspective, the developer didn't need to struggle too much with optimization. The game would sell anyway.

This gen, the PS4 is the complete opposite of the PS3 in almost all areas. The whole focus was on making it as developer-friendly as possible.
 
I also think you need to see the system as a whole. The new CPU part of the PS4 is basically intended to just replace the PPU part, and the GPU part is intended to replace the SPEs+GPU. That said, it is not yet clear that CUs are as easy or as efficient to program and integrate into your workflow as the SPE parts were.

I imagine a lot of the stuff that was done on the SPEs doesn't even need CUs in a modern GPU, i.e. they are just functions that are standard in a newer GPU and don't require any special compute intervention. One obvious example would be vertex shading, which I understand the SPEs helped out with a lot. Obviously, with today's massive unified shader arrays, that's something that wouldn't require any special conversion from SPE code to GPGPU code (even though the work is still done on the CUs, of course).

Certainly, though, I'm getting the impression that dedicated PS4 titles already do more with CUs than most PC games do,

Considering no PC API aside from Mantle even supports async compute at present, I'd say that's a no-brainer. Also, aside from the latest Maxwell GPUs from Nvidia, neither Intel nor Nvidia actually supports async compute in their GPUs anyway, so developers have little motivation to add that capability at present even if the APIs did support it. Of course, it can also be argued that Nvidia and Intel would benefit less from async compute in the first place (or, put in a way more negative towards AMD, AMD requires async compute to maximise the use of its shader array, unlike Nvidia and Intel). That does mean there should be some spare performance left on the table with GCN-based GPUs on the PC, so it will be interesting to see how much of a boost GCN parts get in the PC space relative to Kepler and Maxwell 1 in games that make heavy use of async compute.
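
As a rough illustration of what async compute buys GCN here, the sketch below uses entirely hypothetical types (Queue, CommandList and submit are made-up names, not any real API) to show the submission pattern: compute work goes on a second queue so the GPU scheduler can run it alongside graphics passes and soak up otherwise-idle CUs.

```cpp
#include <functional>

// Hypothetical stand-ins for an API's command lists and queues; none of
// these names correspond to a real graphics API.
struct CommandList { std::function<void()> work; };
struct Queue {
    // Placeholder: a real driver would enqueue this for the GPU scheduler
    // rather than execute it inline on the CPU.
    void submit(const CommandList& cl) { if (cl.work) cl.work(); }
};

// Everything on one queue: the compute pass waits behind the graphics
// passes even if most CUs sit idle during, say, shadow rendering.
void frame_serial(Queue& gfx, CommandList shadows, CommandList gbuffer,
                  CommandList lighting_compute) {
    gfx.submit(shadows);
    gfx.submit(gbuffer);
    gfx.submit(lighting_compute);
}

// Async compute: the compute pass is submitted on its own queue so it can
// overlap the graphics work. A real API would also need a fence/barrier
// before any pass that consumes the compute results.
void frame_async(Queue& gfx, Queue& compute, CommandList shadows,
                 CommandList gbuffer, CommandList lighting_compute) {
    compute.submit(lighting_compute);
    gfx.submit(shadows);
    gfx.submit(gbuffer);
}
```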
 
Digital Foundry: AMD reveals HBM: the future of graphics RAM technology

Nvidia's Prototype

However, it's worth stressing that HBM is not a wholly proprietary technology. The concept of stacked memory modules is hardly unique thinking: AMD says it has been working on the tech for seven years, but the principles behind it are hardly a secret and Nvidia has already shown off a test vehicle showcasing its own version (pictured above). It's a part of its next-gen Pascal architecture that's due to arrive in 2016. But it's AMD that will be first to market with stacked memory and we can't wait to put it through its paces.
 
The PS4 version does appear to run with a capped 30fps, giving a more consistent update than its Xbox One counterpart. We can also confirm native 1080p resolution throughout. We'll have head-to-head performance tests online as soon as possible.

I'm guessing the uncapped XB1 edition has more to do with allowing the game more room to breathe (perform). If capped, the XB1 edition might see terrible drops approaching the low 20s. But still, we're going to hear some PS4 users complain about not having that option...
 
Why would capping the maximum affect the lower bound if they're already triple-buffered/v-synced?
Interesting question. I read once in a DF article (or post) that the uncapped Infamous games strangely had better lower bounds than the capped game: sometimes the capped game had sub-30fps drops where the uncapped game stayed (slightly) above 30fps.

It's easily displayed in this video showing SS and First Light in the same area. The beginning is the uncapped game, then the capped game. You can see that the uncapped First Light never goes under 33fps (it dips to 33fps once; the rest is mostly ~40fps), but strangely, when capped in the same area, the game regularly dips to 28 or 29fps.

 
It's easily displayed in this video showing SS and First Light in the same area. The beginning is the uncapped game, then the capped game. You can see that the uncapped First Light never goes under 33fps (it dips to 33fps once; the rest is mostly ~40fps), but strangely, when capped in the same area, the game regularly dips to 28 or 29fps.

I thought this was generally chalked up to improvements in the engine between SS and FL?

 
Frame pacing issue?

I thought this was generally chalked up to improvements in the engine between SS and FL?


It's comparing the same spots in both builds, just that the first half of the video is without the fps limiter, the latter half with it on. :p
 
It's comparing the same spots in both builds, just that the first half of the video is without the fps limiter, the latter half with it on. :p
I don't have time to read things, I'm busy searching the internet for Spock pictures! :oops:
 
Interesting question. I read once in a DF article (or post) that the uncapped Infamous games strangely had better lower bounds than the capped game: sometimes the capped game had sub-30fps drops where the uncapped game stayed (slightly) above 30fps.

It's easily displayed in this video showing SS and First Light in the same area. The beginning is the uncapped game, then the capped game. You can see that the uncapped First Light never goes under 33fps (it dips to 33fps once; the rest is mostly ~40fps), but strangely, when capped in the same area, the game regularly dips to 28 or 29fps.
There's nothing strange about it. Look at the frame time graph in the video; even when it's "consistently above 30fps" (as a fairly lengthy time-average of performance), you regularly have individual frames displayed for more than 33ms. Those seem to occur with similar frequency in the capped and uncapped videos.

Sometimes the game just takes a while to process a frame.
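
To put numbers on that, here is a small self-contained illustration (the frame times are invented for the example, not measured from the video): a stretch that averages well above 30fps can still contain individual frames over the 33.3ms budget, and a 30fps cap is exactly what exposes those frames as dips.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical frame times in milliseconds: mostly ~25ms (~40fps),
    // with one slow 37ms frame mixed in.
    std::vector<double> frame_ms = {25, 25, 24, 26, 37, 25, 24, 25};

    double total_ms = 0.0;
    int over_budget = 0;
    for (double t : frame_ms) {
        total_ms += t;
        if (t > 1000.0 / 30.0) ++over_budget;  // 33.3ms = 30fps budget
    }

    double avg_fps = 1000.0 * frame_ms.size() / total_ms;
    std::printf("average fps: %.1f\n", avg_fps);           // ~37.9 fps
    std::printf("frames over 33.3ms: %d\n", over_budget);  // 1

    // Uncapped, the counter reads ~38fps and the slow frame hides in the
    // average. Capped at 30fps, that same 37ms frame reads as a drop to
    // roughly 27fps for that interval.
    return 0;
}
```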
 
There's nothing strange about it. Look at the frame time graph in the video; even when it's "consistently above 30fps" (as a fairly lengthy time-average of performance), you regularly have individual frames displayed for more than 33ms. Those seem to occur with similar frequency in the capped and uncapped videos.

Sometimes the game just takes a while to process a frame.

You are definitely right. Now, if we look at the patched Witcher 3 XB1 DF framerate video and focus only on the frame time graph, during gameplay the game regularly has its frame time above 33ms, meaning it would similarly dip under 30fps if it were capped at 30fps... :rolleyes:

Fortunately for the XB1 game, the engine is uncapped. The PS4 game on the other hand...:runaway:
 
http://www.neogaf.com/forum/showpost.php?p=164444919&postcount=2741

hmm...

Durante said:
The in-game FPS cap is not as bad as some other implementations, but also not as good as external tools. After some rather extensive testing (I'm writing an article) I suggest using no in-game framecap, no in-game Vsync, borderless fullscreen and a 30 FPS limit enforced using RTSS. Best combination of consistent performance and low latency I have found.

(Also, this game is pretty neat)
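
For context on why the cap implementation matters, here is a minimal sketch of a fixed-interval frame limiter (purely illustrative; this is not how RTSS actually works, and render_frame is a placeholder). The point is pacing against an absolute schedule so one slow frame doesn't push every later frame off its slot.

```cpp
#include <chrono>
#include <thread>

// Minimal 30fps limiter sketch. Sleeping until an absolute deadline (rather
// than a fixed amount after each frame) keeps frame delivery on a steady
// ~33.3ms grid, which is what makes a cap feel consistent.
void run_capped_30fps(bool (*render_frame)()) {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::microseconds(33333);

    auto next_deadline = clock::now() + frame_budget;
    while (render_frame()) {          // placeholder for the game's frame work
        std::this_thread::sleep_until(next_deadline);
        next_deadline += frame_budget;
    }
}
```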
 
They could have capped the game at 30 fps and kept shadow quality where it was, and also added a real time version of Space Invaders to play during CGI loading screens.

Everyone wins. Except alien invaders.
 
the game regularly has its frame time above 33ms, meaning it would similarly dip under 30fps if it were capped at 30fps... :rolleyes:

Fortunately for the XB1 game, the engine is uncapped.
Ehhh

Judging by the comparison video, a decently implemented cap would make things feel much more stable than they currently are. Some scenes would be rock solid and others would have a blip every couple of seconds, versus the total mess we're seeing right now.
 