Digital Foundry Article Technical Discussion Archive [2015]

New SF4 patch includes the following:

Anisotropic Filtering (AF) changes to decrease blur

Recently PCars also announced fixing the AF via an upcoming patch.


 
That DF article is not representative anymore, and those 22fps drops you mention happened ONLY during cutscenes. Stress tests have been performed on the X1, and in the worst conditions the framerate goes down to 25 fps. And by worst I mean the swamp area on a rainy day with cloudy skies and storms.

This isn't about how low the frame rate dips, it's about how much the frame rate varies. Whether The Witcher has been patched on the XBO for higher performance is irrelevant; all it means is that the higher lows will be accompanied by higher highs, and thus your frame rate variance will be the same. Obviously the latest patch implements a frame rate limiter as well, so that variance will be masked, but that's absolutely no different from applying a frame rate limiter on the PC.

Also, in the PC video you linked to, the variable frame rate in the Witcher 3 was also only apparent during cutscenes - just as you would expect when the on screen action can change wildly from scene to scene, unlike normal gameplay in that title.

With respect, though, your examples just say that you are using a hack.

How do you define "a hack"? This is a PC; its nature is that it's configurable and customisable. Running a 3rd party frame rate limiter is a perfectly acceptable way to limit your frame rate in any game on the PC; just because the concept isn't familiar to you does not mean it is somehow of less value.

The bottom line is that without a frame rate limiter applied, both consoles and PCs are usually going to vary their average frame rate to similar degrees. Consoles often mask that with a frame rate limiter; by default PCs don't, but they certainly have the option to if the user wishes.
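For anyone who hasn't used one, the idea behind a limiter is trivial: time each frame and sleep off the remainder of a fixed budget. Here's a minimal sketch in C, assuming POSIX timing calls and a 30 fps cap purely for illustration; real tools such as RTSS hook the game's present call rather than owning the loop like this.

    /* Minimal frame rate limiter sketch -- illustrative only.
       Assumes POSIX clock_gettime/nanosleep; the 30 fps target is arbitrary. */
    #include <stdint.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    /* render_frame is a placeholder for whatever the game does per frame. */
    void run_capped(void (*render_frame)(void))
    {
        const uint64_t budget_ns = 1000000000ull / 30;  /* 33.3 ms per frame */
        for (;;) {
            uint64_t start = now_ns();
            render_frame();
            uint64_t elapsed = now_ns() - start;
            if (elapsed < budget_ns) {
                /* Sleep off the rest of the budget so frames are delivered at
                   an even cadence instead of as fast as the GPU allows. */
                struct timespec pause = { .tv_sec = 0,
                                          .tv_nsec = (long)(budget_ns - elapsed) };
                nanosleep(&pause, NULL);
            }
        }
    }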
 
I wish DF posted all their analysis for two weeks without identifying the respective platforms, allowed registered comments only, and then released the platform identities and subsequently restricted new comments to people whose opinions had already been registered.

Edit: Backing your way into an answer is a great way to do math and physics, but also to determine which platform is superior...
 
The problem is that people draw conclusions about platform superiority based on a single game that was developed on PC and then ported to consoles.
 
I wish DF posted all their analysis for two weeks without identifying the respective platforms, allowed registered comments only, and then released the platform identities and subsequently restricted new comments to people whose opinions had already been registered.

That would only work with HUD-less scenes though.
 
Why had I never heard of frame pacing issues before the birth of PS4 and X1?

You should have. It started in the PS360 era for consoles and slightly earlier for PCs. In the PS2 era consoles were gaming machines, i.e. they did not run any code apart from game code. Therefore you could make a perfectly synced 60 or 30 fps game (50 or 25 fps in the case of PAL). And then consoles tripped on two things:
1. Running an OS, downloads and other crap besides the game code.
2. Complex GPUs. The PS2 was simple: it could draw triangles and apply one texture; if you wanted more, you drew several times. DX9 and DX10 GPUs, however, became too complex: they had a lot of specific fast paths for specific rendering paths, which led to very strange performance problems.
On the other hand, the situation with GPUs is much better now: modern GPU architectures (like GCN) are very, very simple. The fast path is there to do one thing, draw triangles into the Z-buffer; everything else is done by normal, predictable ALUs with known performance.
Running an OS is still a problem, though.
 
Got myself 980Ti. And it still cannot run this game (Witcher 3) on Ultra + Hair at 1080p@60
For the skeptics: got fresh Win 8.1 image to check it out - no change, as expected.
 
Got myself 980Ti. And it still cannot run this game (Witcher 3) on Ultra + Hair at 1080p@60
For the skeptics: got fresh Win 8.1 image to check it out - no change, as expected.

Hm... my heavily OC'd 970 at 1504MHz runs this at Ultra without the performance killer that is Ultra Flowers (don't know the setting's name right now). Not a locked 60Hz, but it is coming close. In-game it runs >55Hz most of the time; only in cutscenes and when you shove the camera into Geralt's face does it drop to 40Hz with Hairworks enabled.

The difference in flower draw distance isn't at all worth the performance drop it causes.

At 4K, this game looks... insane. Sadly, I have to drop everything to medium and low to get anywhere near 30Hz with my PC. So 1080P it is.

EDIT: Using Fullrate Adaptive VSync is nice as well. No performance penalty and no tearing above 60Hz
 
Got myself 980Ti. And it still cannot run this game (Witcher 3) on Ultra + Hair at 1080p@60
For the skeptics: got fresh Win 8.1 image to check it out - no change, as expected.

This is why I'm holding off on buying a PC right now, I don't feel the upgrade is substantial enough just yet - even if the theoretical performance is massively different between console and a 980Ti.

I'd like to see an average framerate graph for Witcher 3 with all settings applied (including hair, and not just on the main character) on a 980Ti at 1080p, if you've the time? All of the tests I've seen so far always exclude the hair and show framerates >60. Surely if you have a framerate in excess of 70 fps, then you increase further settings to compensate? I guess the impact must be so great that no one ever bothers.
 
Got myself 980Ti. And it still cannot run this game (Witcher 3) on Ultra + Hair at 1080p@60
For the skeptics: got fresh Win 8.1 image to check it out - no change, as expected.
Just out of curiosity, do you have a G-Sync monitor? If so, how is it working for you? A fluctuating ~50fps with adaptive vsync compared to a locked 60fps?
 
Did the era of inconsistent frame delivery not apply before the PS2?
I don't recall a frame-locked experience in Goldeneye when I grabbed a grenade launcher and decided to be a high-explosive lawn sprinkler.

Is it more that I just don't remember frame rates from decades ago on ancient tech?
We're seeing the result of highly inconsistent technology scaling. Back in the day, most things were more in line with each other, not orders of magnitude off like storage, RAM, IO, CPU, and display.

Consoles in the generations before the N64 relied on specific hardware timings just to not explode into graphical garbage, and that was a time when timings could be measured in kHz with an oscilloscope, not roughly three million times faster like they are today.
 
I don't recall a frame-locked experience in Goldeneye when I grabbed a grenade launcher and decided to be a high-explosive lawn sprinkler.

For some reason the low framerates when you did that just seemed to make it feel more epic.
 
Game physics time tended to be tied to the screen refresh in a fixed game loop, so games ran in slow motion when it got busy. Which helped with navigating the mess of bullets and explosions! Decoupled physics and render times mean that nowadays you just can't make out what's going on if the frame rate really dives.
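A rough sketch of the two loop styles in C, just to make the difference concrete; the function names (update_physics, render, etc.) are placeholders, not any real engine's API.

    /* Placeholder declarations -- stand-ins for a real engine. */
    void update_physics(double dt);
    void render(void);
    void wait_for_vblank(void);
    double seconds_now(void);

    /* Old style: one fixed physics step per displayed frame. When rendering
       slows down, fewer steps run per real second, so the game itself runs
       in slow motion. */
    void fixed_loop(void)
    {
        for (;;) {
            update_physics(1.0 / 60.0);   /* always one "frame" of game time */
            render();
            wait_for_vblank();            /* tied to the display refresh */
        }
    }

    /* Modern style: physics runs at a fixed timestep off an accumulator,
       decoupled from render time. Game speed stays constant; a slow frame
       just means several physics steps happen before the next (late) image. */
    void decoupled_loop(void)
    {
        const double dt = 1.0 / 60.0;
        double accumulator = 0.0;
        double previous = seconds_now();
        for (;;) {
            double current = seconds_now();
            accumulator += current - previous;
            previous = current;
            while (accumulator >= dt) {
                update_physics(dt);
                accumulator -= dt;
            }
            render();
        }
    }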
 
This is why I'm holding off on buying a PC right now, I don't feel the upgrade is substantial enough just yet - even if the theoretical performance is massively different between console and a 980Ti.

I'd like to see an average framerate graph for Witcher 3 with all settings applied (including hair, and not just on the main character) on a 980Ti at 1080p, if you've the time? All of the tests I've seen so far always exclude the hair and show framerates >60. Surely if you have a framerate in excess of 70 fps, then you increase further settings to compensate? I guess the impact must be so great that no one ever bothers.

Actually, if you dive into the ini files, you can for instance change the MSAA on the HairWorks from 8x to 2x, which helps a lot with performance without visibly decreasing the detail that much. I haven't tried it myself, but 'no one ever bothers' is probably incorrect based on what I've read. ;)
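For reference (from memory, so treat the file name and key as assumptions and check your own install before editing): the value lives in the game's rendering config, something like bin\config\base\rendering.ini, and the edit is just:

    ; rendering.ini -- key name and default quoted from memory, verify locally
    HairWorksAALevel=2    ; default is 8; 2x MSAA is far cheaper with little visible loss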
 
Did the era of inconsistent frame delivery not apply before the PS2?
I don't recall a frame-locked experience in Goldeneye when I grabbed a grenade launcher and decided to be a high-explosive lawn sprinkler.

Is it more that I just don't remember frame rates from decades ago on ancient tech?
We're seeing the result of highly inconsistent technology scaling. Back in the day, most things were more in line with each other, not orders of magnitude off like storage, RAM, IO, CPU, and display.

Consoles in the generations before the N64 relied on specific hardware timings just to not explode into graphical garbage, and that was a time when timings could be measured in kHz with an oscilloscope, not roughly three million times faster like they are today.
What is more shocking is that you could work marvels with a 3 GFLOPS CPU (the Emotion Engine) purely through heavy programming of its vector processors, and yet now, when we are used to teraflop jumps with each new GPU model on the market (from 1.2 TFLOPS in a 7750, to 3.5 in a GTX 970, to 6.2 in a Titan X, to the 8.5 in the rumoured Fury X), a game like TW3 can't even run at 1080p/60 fps at ultra with HairWorks on a Titan X.
How many Emotion Engines' worth of compute between GPU model jumps are wasted for almost no effect?
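A rough back-of-the-envelope answer, using the figures quoted above:

    6.2 TFLOPS (Titan X) - 3.5 TFLOPS (GTX 970) = 2.7 TFLOPS = 2700 GFLOPS
    2700 GFLOPS / ~3 GFLOPS (Emotion Engine)    ≈ 900 Emotion Engines

So a single step between adjacent high-end models already spans several hundred Emotion Engines' worth of raw throughput.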
And that's not even talking about the holy grail, real-time global illumination: until when won't even the less intensive ways of doing it (VXGI?) be feasible? Einstein with 20-30 TFLOPS? And even then it will only be feasible for 20 moving objects or so...

Isn't it time for a comeback of ad-hoc chips for things like lighting, physics...? Maybe we are spending our graphics transistor budget in the least efficient way... I would love it if things like the PowerVR RTU succeeded and we started talking in terms of millions of rays per second instead of TFLOPS.

PS: Yes, the Amiga 500 was ace.
 
The Amiga 'only' had a dedicated Blitter and a dedicated Audio Chip, right? Current supposedly general purpose GPUs are far more specialised than the Amiga 500. ;) I think we probably eventually should just have 3 groups of cores that are good at the main three categories of workload, and all have those on a bus in such a way that they can work together really well. And evolution of algorithms will then decide which of the three groups of cores is the most 'valuable' for what types of work. Will be interesting to see where this goes, and what will drive the next step.
 
The Amiga 'only' had a dedicated Blitter and a dedicated Audio Chip, right? Current supposedly general purpose GPUs are far more specialised than the Amiga 500. ;) I think we probably eventually should just have 3 groups of cores that are good at the main three categories of workload, and all have those on a bus in such a way that they can work together really well. And evolution of algorithms will then decide which of the three groups of cores is the most 'valuable' for what types of work. Will be interesting to see where this goes, and what will drive the next step.
Would those 3 groups be lighting, physics and shaders/rasterization? (Plus CPU logic, of course.)

PS: The Amiga graphics subsystem (Agnus) was more than the famous blitter (and the Copper?), and for its time having something similar to a "GPU" was as ad-hoc as a ray tracing chip would be now.
 
Copper was very clever. A co-processor synchronised to the display beam so you could write code (a copper list, 4 instructions only: wait, move, skip and end) that pumped values into the custom hardware depending on where the beam was.

A good sprite multiplexer routine could squeeze dozens of hardware sprites from the theoretical eight and using copper you could get more.

You could even paint the background colour (dff180) to about a four horizontal pixel precision producing a display with effectively no colours and no display memory. I loved that thing!
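For anyone who never touched the hardware, here is a minimal sketch of such a copper list, written as a C data array of the kind an Amiga compiler (SAS/C, vbcc) would accept; the exact colours and raster lines are arbitrary, and the __chip attribute is compiler-specific.

    /* Sketch of a copper list that repaints COLOR00 ($dff180) partway down
       the screen -- colour bars with no bitplane (display) memory at all.
       First word of each pair: WAIT position or register offset from $dff000;
       second word: WAIT mask or MOVE data. */
    #include <exec/types.h>

    UWORD __chip copperlist[] = {
        0x2c01, 0xfffe,   /* WAIT until raster line $2c                */
        0x0180, 0x0f00,   /* MOVE $0f00 (red) into COLOR00             */
        0x8c01, 0xfffe,   /* WAIT until raster line $8c                */
        0x0180, 0x000f,   /* MOVE $000f (blue) into COLOR00            */
        0xffff, 0xfffe    /* end of list: an impossible WAIT position  */
    };

Point COP1LC at that and enable copper DMA and it runs every frame, with no bitplanes turned on at all.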

edit: typos.
 
Copper was very clever. A co-processor synchronised to the display beam so you could write code (a copper list, 4 instructions only: wait, move, skip and end) that pumped values into the custom hardware depending on where the beam was.

A good sprite multiplexer routine could squeeze dozens of hardware sprites from the theoretical eight and using copper you could get more.

You could even paint the background colour (dff180) to about a four horizontal pixel precision producing a display with effectively no colours and no display memory. I loved that thing!
Imagine something like that today, but deciding whether the pixels going to the screen are lit or not ;).
 