Digital Foundry Article Technical Discussion Archive [2014]

I believe most modern televisions are actually 120Hz to allow them to display 24Hz/fps content and 60Hz content without judder. The difference is that they only have to accept 24Hz content; they're not trying to display 120Hz content at 120Hz. The motion interpolation stuff is just a fairly crude copy-frame + filter operation though; its effectiveness is variable, with sports content apparently faring best, IIRC from the last time I was on the AVSforums salivating over TVs.
 
I believe most modern televisions are actually 120Hz to allow them to display 24Hz/fps content and 60Hz content without judder. The difference is that they only have to accept 24Hz content; they're not trying to display 120Hz content at 120Hz. The motion interpolation stuff is just a fairly crude copy-frame + filter operation though; its effectiveness is variable, with sports content apparently faring best, IIRC from the last time I was on the AVSforums salivating over TVs.
Mine displays 24p content at 72fps. But then mine is also not completely "modern" anymore. I think I bought it in 2005. Aah, Pioneer Kuro, how I love thee.. :)
 
I believe most modern televisions are actually 120Hz to allow them to display 24Hz/fps content and 60Hz content without judder. The difference is that they only have to accept 24Hz content; they're not trying to display 120Hz content at 120Hz. The motion interpolation stuff is just a fairly crude copy-frame + filter operation though; its effectiveness is variable, with sports content apparently faring best, IIRC from the last time I was on the AVSforums salivating over TVs.
Right. Most 120Hz/240Hz etc. TVs do in fact refresh at those rates, but they don't actually accept 120Hz signals. The purpose is to allow film to play at its native framerate by displaying each frame 5x, 10x etc., avoiding the 3:2 pulldown which causes judder. People falsely link frame interpolation to 120Hz, but they really are not the same thing.

Mine displays 24p content at 72fps. But then mine is also not completely "modern" anymore. I think I bought it in 2005. Aah, Pioneer Kuro, how I love thee.. :)
Plasmas generally use 48Hz, 72Hz or 96Hz for the same purpose.
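A quick way to see why even multiples matter: only refresh rates that divide evenly by 24 can show every film frame for the same number of refreshes. A minimal sketch of that arithmetic (film cadence only, nothing here is from the DF article):

```python
# Rough sketch: how many refreshes each 24fps film frame gets at common TV refresh rates.
# Rates that are exact multiples of 24 (48, 72, 96, 120, 240) repeat every frame equally.
# 60Hz does not divide evenly, so players fall back to 3:2 pulldown (frames shown 3,2,3,2,...).

FILM_FPS = 24

for refresh_hz in (48, 60, 72, 96, 120, 240):
    repeats = refresh_hz / FILM_FPS
    if repeats.is_integer():
        print(f"{refresh_hz}Hz: every frame shown {int(repeats)}x -> even cadence, no judder")
    else:
        print(f"{refresh_hz}Hz: {repeats:.2f} refreshes per frame -> uneven pulldown (judder)")
```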
 
Mine displays 24p content at 72fps. But then mine is also not completely "modern" anymore. I think I bought it in 2005. Aah, Pioneer Kuro, how I love thee.. :)

That's perfectly fine. It's a problem when the screen refreshes at a frequency that is not a multiple of 24, like most older and non-high-end screens; that creates judder during 24p viewing. Even today there are a lot of TVs that cannot display 24p.
The Kuro is still an amazing screen. It's why I got my Panny plasma. After Panasonic bought the tech, their plasmas were the closest thing you could get to a Kuro.
 
Right. Most 120Hz/240Hz etc. TVs do in fact refresh at those rates, but they don't actually accept 120Hz signals. The purpose is to allow film to play at its native framerate by displaying each frame 5x, 10x etc., avoiding the 3:2 pulldown which causes judder. People falsely link frame interpolation to 120Hz, but they really are not the same thing.


Plasmas generally use 48Hz, 72Hz or 96Hz for the same purpose.

FWIW, my plasma TV said 600Hz on the box.

Of course, it was bought in 2009, so I'm damn sure it won't accept 120Hz through HDMI.
 
I'm putting an end to the 'price of getting a comparable PC' discussion as that's OT and was never the subject when it was raised. The DF article is comparing a cheap GPU for those who can upgrade, and the low cost of capable parts was raised in contrast to the tradition of consoles having cutting-edge components requiring an expensive PC GPU to match.

Whether people are in a position to upgrade or not (the average state of the home PC) might be worth discussing, but not in this thread. The price of PC hardware already has its own discussion, I think, which can be revisited if people want to trade price lists again.
 
And IMHO the next round of games is going to be interesting. Won't we see games with textures that suddenly need to be lower res on PC because of the limited RAM on the graphics card? Then what? The PS4 has 8GB of GDDR5 RAM; even the most expensive graphics cards are running short of that.

You don't need 8GB of GDDR5 for games. The consoles reserve a very large chunk of that for the OS, and the game code itself can run in regular DDR3 memory just fine; it doesn't need to be in GDDR5. ~4GB of GDDR5 should be enough memory for video-related tasks, with DDR3 filling in the rest.
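As a very rough sanity check on that budget (the OS reservation is an assumed launch-era figure, not something stated in the article or thread):

```python
# Back-of-the-envelope budget for the PS4's 8GB of unified GDDR5.
# The ~3.5GB OS reservation is an assumed, commonly reported launch-era figure; treat it
# as illustrative only.

TOTAL_GDDR5_GB = 8.0
OS_RESERVED_GB = 3.5                       # assumed system/OS reservation
game_budget = TOTAL_GDDR5_GB - OS_RESERVED_GB

print(f"Left for the game: ~{game_budget:.1f}GB")
# On PC that budget splits across system DDR3 (game code and data) and the card's own
# GDDR5 (textures, render targets), which is why a ~4GB card plus DDR3 can cover it.
```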
 
I don't recall anyone saying the 260 performed better.

My point was that even taking those 30fps-locked games into the comparison was disingenuous when they could unlock the NFS PC game, for instance, but not the next-gen game.
The framerates are not comparable because we don't know the real framerate on console!

First of all, it's a console game, so there's no such thing as CPU- or GPU-bound. The CPU may be what's providing the hard limit on performance, but the GPU will still be pushed to its full potential by the simple addition of graphical effects, or something even more basic: resolution.
Of course, but if you test with a CPU-bound game and use a CPU twice or more as powerful as the console's, you don't test only the GPU (which was the point of the article) but a combination of CPU + GPU. If they had underclocked the AMD hexacore to 1.6GHz then yes, it would have been fair. It wasn't fair. Why didn't they underclock the CPU used? They did underclock a GPU once when they wanted to predict the future framerate gap between PS4 and X1 in a previous article.
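The CPU-bound objection can be put in one line: frame time is roughly the longer of the CPU and GPU times per frame, so a much faster CPU changes the result even if the GPU work is identical. A toy model of that (the millisecond figures below are invented purely for illustration):

```python
# Toy frame-time model: each frame costs max(CPU time, GPU time).
# The millisecond numbers are made up to illustrate the argument, not measured.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 22.0                      # same GPU workload in both cases

console_cpu_ms = 30.0              # slow CPU dominates -> CPU-bound
pc_cpu_ms = 15.0                   # CPU twice as fast -> the GPU becomes the limit

print(f"CPU-bound (console-like CPU): {fps(console_cpu_ms, gpu_ms):.1f}fps")
print(f"Same GPU, faster CPU:         {fps(pc_cpu_ms, gpu_ms):.1f}fps")
# The framerate gap here comes entirely from the CPU, which is the objection being raised.
```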

So this isn't a simple matter of ignoring the GPU and focusing on the CPU. The GPU in the PS4 is clearly being pushed to its limits to achieve the sub-1080p resolution, or it would run higher (or match the PC's high graphical effects).
On BF4 for consoles, the main bottleneck is the CPU; the devs have implied it several times. Ergo, one can't use this game to compare GPUs at all.

And just for the record, I don't see the value in claiming the 260 is a 2TF GPU while ignoring its other specs. It's as if you're trying to show that even with more power the PC GPU can only just keep up, whereas in reality the PC GPU, while (almost) 2TF, actually only has 70% of the PS4's fill rate and 60% of its memory bandwidth, so getting as close as it does is a big achievement.
It's either 1.97TF or 2TF, but they said 1.9TF in the article. Why? Who rounds down numbers like that? Don't forget the bandwidth allocated to the CPU on consoles, too...
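For reference, the figures being traded here can be reproduced from the commonly quoted public specs, assuming the PC card in the article is the R7 260X (a sketch, not DF's own numbers):

```python
# Rough spec arithmetic, assuming the PC card is the R7 260X and using commonly quoted figures.
# TFLOPS = shaders * 2 ops (fused multiply-add) * clock;  pixel fill rate = ROPs * clock.

def tflops(g): return g["shaders"] * 2 * g["ghz"] / 1000.0
def gpix(g):   return g["rops"] * g["ghz"]

r7_260x = {"shaders": 896,  "rops": 16, "ghz": 1.1, "bw_gbs": 104.0}
ps4     = {"shaders": 1152, "rops": 32, "ghz": 0.8, "bw_gbs": 176.0}

print(f"260X: {tflops(r7_260x):.2f}TF   PS4: {tflops(ps4):.2f}TF")                    # ~1.97 vs ~1.84
print(f"Fill rate, 260X as a share of PS4: {gpix(r7_260x) / gpix(ps4):.0%}")          # roughly 70%
print(f"Bandwidth, 260X as a share of PS4: {r7_260x['bw_gbs'] / ps4['bw_gbs']:.0%}")  # roughly 60%
```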

Umm, this is flat-out wrong. I'm not sure where you're getting those numbers from, but the 260 has only 38% of the X1's bandwidth when you include the eSRAM, which you obviously must.
And you can't take Microsoft's PR bandwidth numbers. The eSRAM has a 109GB/s peak bandwidth, which is roughly what the PC card has. I very much doubt the COD devs will have optimized it to reach, in certain rare cases, the ideal 140GB/s (only used for some specific stuff). And I doubt they will combine eSRAM and main RAM bandwidth in the game. Just looking at the difference, 720p versus 1080p on PS4, strongly shows a lack of optimization for the X1 architecture. We can roughly see the X1 as a 109GB/s max-bandwidth GPU machine with COD and for this comparison.

And what a bizarre conclusion that would be given the available evidence.
Well, many things are a matter of interpretation. I used my fuzzy logic differently than DF or you.
 
My point was that even taking those 30fps-locked games into the comparison was disingenuous when they could unlock the NFS PC game, for instance, but not the next-gen game.
The framerates are not comparable because we don't know the real framerate on console!

I wouldn't say it's disingenuous. There are very few examples of cross-gen games to use, and those games at the very least tell us that the 260 is in the same ballpark as the console GPUs. Maybe the average framerate would have been a little higher on the consoles, but not so high as to either lock the framerate at 60fps or up the graphical settings beyond what the 260 can cope with. So it's reasonable to assume they're performing similarly.

Of course, but if you test with a CPU-bound game and use a CPU twice or more as powerful as the console's, you don't test only the GPU (which was the point of the article) but a combination of CPU + GPU. If they had underclocked the AMD hexacore to 1.6GHz then yes, it would have been fair. It wasn't fair. Why didn't they underclock the CPU used? They did underclock a GPU once when they wanted to predict the future framerate gap between PS4 and X1 in a previous article.

On BF4 for consoles, the main bottleneck is the CPU; the devs have implied it several times. Ergo, one can't use this game to compare GPUs at all.

But you're seeing the limits of the GPU specifically in BF4 on PS4. Resolution has no impact on CPU performance, so if the game were truly only limited by CPU performance and had GPU performance to spare as you're suggesting, then why is it running at 900p rather than 1080p?

Clearly, that's a GPU performance limit and it's a level of performance the 260 can match.
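The resolution point is simple arithmetic: going from 900p to 1080p adds pixels for the GPU to shade while the CPU's per-frame work stays the same. A quick count (900p here is the BF4-on-PS4 figure under discussion):

```python
# Pixel-count arithmetic for the 900p vs 1080p point: resolution scales GPU work
# (pixels shaded per frame) but leaves CPU-side work per frame untouched.

w900,  h900  = 1600, 900
w1080, h1080 = 1920, 1080

pixels_900  = w900 * h900        # 1,440,000
pixels_1080 = w1080 * h1080      # 2,073,600

print(f"1080p is {pixels_1080 / pixels_900:.0%} of the 900p pixel count "
      f"(~{(pixels_1080 / pixels_900 - 1) * 100:.0f}% more GPU pixel work)")
```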

It's either 1.97TF or 2TF, but they said 1.9TF in the article. Why? Who rounds down numbers like that? Don't forget the bandwidth allocated to the CPU on consoles, too...

I think you're reading too much into it. They're not trying to misrepresent (underplay) the power of the GPU; if they were, they would need to look no further than its memory bandwidth.

And yes, the consoles will allocate some bandwidth to the CPU, but even then they still have a very healthy lead in overall bandwidth over the 260X.

And you can't take Microsoft's PR bandwidth numbers. The eSRAM has a 109GB/s peak bandwidth, which is roughly what the PC card has. I very much doubt the COD devs will have optimized it to reach, in certain rare cases, the ideal 140GB/s (only used for some specific stuff).

No, it's more like 209GB/s peak bandwidth since it can read and write simultaneously. So that's already double the 260's, plus the 68GB/s of main memory bandwidth.

Bottom line is that the X1 has far more bandwidth available to it than the PC GPU. Claims that "it won't be used because the devs won't have optimised for the eSRAM" are pure baseless speculation. It's far more likely that the devs did indeed optimise the game to make as much use of the eSRAM as possible, because you know, it's like, their job.
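Putting the numbers quoted in this exchange side by side shows where the two readings diverge; it comes down to which eSRAM figure you count (all figures are the approximate ones stated above, with the PC card assumed to be the 260X at ~104GB/s):

```python
# Bandwidth figures as quoted in this exchange (approximate; PC card assumed to be the 260X).
PC_CARD_GBS    = 104.0   # 260X GDDR5
X1_DDR3_GBS    = 68.0    # X1 main memory
ESRAM_ONE_WAY  = 109.0   # eSRAM, read or write only
ESRAM_RW_PEAK  = 209.0   # eSRAM peak with simultaneous read+write, as quoted above

conservative_x1 = ESRAM_ONE_WAY                  # the "109GB/s machine" reading
aggregate_x1    = X1_DDR3_GBS + ESRAM_RW_PEAK    # DDR3 + peak eSRAM, the "38%" reading

print(f"260X vs eSRAM one-way only:  {PC_CARD_GBS / conservative_x1:.0%}")  # ~95%
print(f"260X vs DDR3 + peak eSRAM:   {PC_CARD_GBS / aggregate_x1:.0%}")     # ~38%
```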

And I doubt they will combine eSRAM and main RAM bandwidth in the game.

Umm, why? That's how the consoles are designed to be used.

Just looking at the difference, 720p versus 1080p on PS4, strongly shows a lack of optimization for the X1 architecture. We can roughly see the X1 as a 109GB/s max-bandwidth GPU machine with COD and for this comparison.

This is a ridiculous conclusion to draw from the available evidence. How about the almost double fill rate of the PS4? Or the ~40% higher shader and TMU throughput? The X1 is not at a bandwidth disadvantage compared with the PS4, and a strong argument can be made that it's at an advantage.
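The "almost double fill rate" and "~40% higher shader throughput" figures also fall out of the commonly quoted public specs (again a rough sketch, not DF's numbers):

```python
# PS4 vs X1 throughput ratios from commonly quoted specs (approximate).
# Pixel fill rate = ROPs * clock;  relative shader throughput = shaders * 2 * clock.

def fill_gpix(g):  return g["rops"] * g["ghz"]
def shader_tf(g):  return g["shaders"] * 2 * g["ghz"]

ps4 = {"shaders": 1152, "rops": 32, "ghz": 0.800}
x1  = {"shaders": 768,  "rops": 16, "ghz": 0.853}

print(f"Fill rate,   PS4 vs X1: {fill_gpix(ps4) / fill_gpix(x1):.2f}x")   # ~1.88x, close to double
print(f"Shader rate, PS4 vs X1: {shader_tf(ps4) / shader_tf(x1):.2f}x")   # ~1.41x, i.e. ~40% higher
```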
 
Changing the subject, are there any more advanced methods of post-processing AA that could see the light of day now the new hardware is out? Or are SMAA, MLAA and FXAA pretty much it?
 
There are and will be. Ryse is using an improved post-FX AA to very good effect IMO, which reportedly uses some temporal analysis IIRC (a bit unsure on that though).
 
There are and will be. Ryse is using an improved post-FX AA to very good effect IMO, which reportedly uses some temporal analysis IIRC (a bit unsure on that though).

Killzone SF also uses temporal analysis, taking information from previous frames for the AA.
 
Changing the subject, are there any more advanced methods of post-processing AA that could see the light of day now the new hardware is out? Or are SMAA, MLAA and FXAA pretty much it?

The best and cheapest post-process anti-aliasing IMO is Ryse's AA: SMAA 1TX. It's SMAA with a temporal component. Killzone SF also has a temporal component, like Hesido said.

But the difference is that (apart from the identical temporal part, which deals mainly with temporal aliasing) KZ uses a blurring algorithm (FXAA will indiscriminately blur any contrast between sub-pixels), whereas Ryse uses the much superior morphological algorithm SMAA, which only looks for (or tries to find) aliased edges and tries to reconstruct them to match an ideal 8x SSAA.

The differences between KZ and Ryse are striking: KZ looks cartoony whereas Ryse is hyper-realistic and sharp, with a real "next-gen" feeling. At least when you are looking at only one frame/image.
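For anyone curious what the "temporal component" boils down to, here is a very stripped-down sketch of a temporal resolve pass: blend the current pixel with its history from the previous frame, clamped to the current neighbourhood so stale history doesn't ghost. This is illustrative only, not the actual SMAA 1TX or Killzone implementation (real implementations also reproject the history with motion vectors, which is skipped here):

```python
import numpy as np

def temporal_resolve(current, history, blend=0.1):
    """Toy temporal AA resolve: clamp the history colour to the current frame's 3x3
    neighbourhood (to limit ghosting), then blend it with the current pixel.
    'current' and 'history' are HxWx3 float arrays; no motion-vector reprojection is
    done, which a real implementation (SMAA 1TX, Killzone SF) would need."""
    h, w, _ = current.shape
    padded = np.pad(current, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # 3x3 neighbourhood min/max around each pixel of the current frame.
    stack = np.stack([padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)])
    nmin, nmax = stack.min(axis=0), stack.max(axis=0)
    clamped_history = np.clip(history, nmin, nmax)
    # Mostly history, a little current: edge aliasing averages out over several frames.
    return blend * current + (1.0 - blend) * clamped_history

# Usage sketch: feed the previous resolved output back in as 'history' each frame.
frame0 = np.random.rand(4, 4, 3).astype(np.float32)
frame1 = np.random.rand(4, 4, 3).astype(np.float32)
resolved = temporal_resolve(frame1, history=frame0)
```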
 
How the hell is the Xbox One getting a 6fps range (24fps low to 30fps high) while the PS4 is getting a 27fps range (33fps low to 60fps high)?

BTW, where is the 45fps peak that was mentioned for Xbox One? :oops:
They probably decided to cap the X1 version because it was closer to 30fps overall anyway.

The stats for the PS4 version are a bit misleading... it only dipped to 33fps at one point for a few seconds. Gameplay is mostly ~40-50fps during intense scenes, and ~60fps when not much is happening.
 