Rift, Vive, and Virtual Reality

To be perfectly clear, and contrary to Epic's hype machine and grandiose PR: UE4 in its current form is sh@t for high-performance VR, simply because it's a deferred renderer, which isn't suited for aggressive use of MSAA, among other issues that hinder its performance in intensive VR scenarios. That's why Unity 5.x (forward renderer) has had the upper hand in terms of IQ and performance right now (alongside Valve's own Source 2, which The Lab runs on next to Unity 5.4). People have been using the super-sampling hack/ini editing (renderTargetMultiplier) to improve IQ in UE4-powered VR games/apps (which is far from a reasonable solution). To alleviate some of the performance issues in UE4, Nvidia implemented their proprietary Multi-Res Shading, which obviously only works on GeForce GPUs (it is used in the latest updates of Pool Nation VR and Raw Data). As far as I'm concerned UE4 can be called an Nvidia-sponsored engine anyway, so don't expect miracles on AMD hardware.
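
For reference, the SteamVR flavor of that super-sampling hack is a hand edit of Steam\config\steamvr.vrsettings. A sketch of what people add (the 1.5 multiplier is illustrative, not a recommendation, and SteamVR has to be restarted for it to take effect):

    {
        "steamvr" : {
            "renderTargetMultiplier" : 1.5
        }
    }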

As a matter of fact, Oculus had to write their own experimental forward rendering path for their own UE4-based demos: https://developer.oculus.com/blog/introducing-the-oculus-unreal-renderer/ (GitHub source).
Valve's modified Unity 5.4 forward renderer used in The Lab: https://www.assetstore.unity3d.com/en/#!/content/63141

HardOCP's benchmarks are to be taken with a mountain of salt. Just for the heck of it I played through the Robot Repair demo on a Fury X and, contrary (and sadly unsurprisingly) to Mr. Bennett's findings, I recorded 0 (ZERO) frame drops during the whole demo and no instance of reprojection (frame drops and reprojection only happen on The Lab's loading screen)... go figure.

EDIT: Oh, just found out that Kyle's still a scumbag... Adaptive Resolution was broken in the SteamVR build he used, and he doesn't want to redo the test because he "assumes" the end results would be the same (Nvidia better than AMD):
https://hardforum.com/threads/amd-n...robot-repair-h.1908149/page-2#post-1042510695
Bug was fixed in the SteamVR update 3 days ago: http://steamcommunity.com/gid/103582791435040972/announcements/detail/608371609732683821

When asked earlier if he had bothered running the SteamVR Performance Test on the Fury X to check whether something was wrong, because his results were fishy, he replied the following:
https://hardforum.com/threads/amd-n...alves-robot-repair-h.1908149/#post-1042483402

So yeah...can we please stop linking to his trash from now on?
 
There's no doubt that the UE4 renderer seems to be a clunky path to take right now, but like it or not that's the reality we have to deal with for content that's already been completed under it, and for any further content that's done before Epic packages in a forward renderer. Considering the premium price of VR hardware right now, it seems very important to me not to sugarcoat the reality of the user experience when it comes to performance, bugs, reliability, etc., whether that's caused by the HMD software stack, the client application, or the graphics hardware and drivers, as all of those things have to work in concert to deliver an acceptable experience. That's the reality VR users face every day, and something would-be VR users need to be aware of before they enter this market.

In terms of Kyle's reviews, I find them kind of refreshing, to be honest. For the latter half of 2015 we had reddit and every tech site's message board flooded by mobs of people who didn't even own VR headsets making their case for the Fury X over the 980 Ti solely based on the promises of marketing-slide rhetoric, despite what actual hands-on performance was showing. The VR community is still relatively tiny, and if the community is not on its collective toes about how things are actually functioning, and we don't have mouths like Kyle able to draw attention to it, then we're going to be stuck with a lot of half-cooked software, under-reported bugs, and unhealthily effective PR.
 
In terms of Kyle's reviews, I find them kind of refreshing, to be honest.

Does it really help the situation, however, if some of the information he spreads is incorrect, whether deliberately or through sheer laziness, as long as it fits his view of how things should be? I'm quite certain he'd redo many of those tests if a user brought up that the Nvidia results were lower than they should be, and/or if the application had a since-fixed bug that may have negatively impacted the performance of Nvidia hardware.

That's like saying it's OK to spread misinformation about AMD hardware because developers aren't coding specifically to AMD's strengths. You can show that just as well by doing things in a rigorous and ethical manner, even if the difference would be smaller or non-existent in some of the applications he tested.

Either way, applauding someone for not being thorough, as long as the results fit the message they wish to convey, when their words reach and influence a large segment of the enthusiast community, seems rather counterproductive.

Regards,
SB
 
Kyle has been a joke for years. His was the one site defending the GeForce FX 5800 Ultra when it hit, and he kept defending it through its whole life cycle. I've stayed away since then, and the site has never gotten better.
 
Does it really help the situation, however, if some of the information he spreads is incorrect, whether deliberately or through sheer laziness, as long as it fits his view of how things should be?

My understanding is that it wasn't incorrect at the time he did the benchmarks, that it is a good reflection of the frame times of the respective cards, and that it is still much-needed commentary on how big the impact of the SDK is. Ideally, Kyle would be continually re-testing all VR content over the course of months, as most VR titles are early access and going through a constant process of optimization (as is pretty well every other relevant part of the system), but there are limits to what I can expect of any hardware site for such a tiny audience.

That's like saying it's OK to spread misinformation about AMD hardware because developers aren't coding specifically to AMD's strengths. You can show that just as well by doing things in a rigorous and ethical manner, even if the difference would be smaller or non-existent in some of the applications he tested.

It's not misinformation, though, if that's how it actually performed at the time of the test. This isn't the GPU architecture Olympics, where the IHVs produce the most capable silicon their engineers and fabs can manage and winners and losers are chosen based on some synthetic metric. Graphics is as much about software as it is about hardware, and with VR even more so, as it needs every optimization it can get, by whatever means necessary. Those optimizations are going to change year over year, and the degree to which they're actually implemented and fully exploited is going to vary over time, between vendors, specific GPU architectures, game engines, and particular build versions of game engines. True apples-to-apples doesn't exist, and it's probably going to get far worse in the coming few years, so we're going to have to accept some degree of subjectivity and slightly fuzzy evaluation. If there were other sites doing these sorts of game-by-game performance evaluations, then I would probably choose another site, as I tend to do for other computer hardware, but beggars can't be choosers. If someone is planning on buying into VR and needs a GPU upgrade, they're going to come away with far more grounded expectations of GPU performance by reading Kyle's articles than if they were to simply accept the minimum hardware requirements and pick their favorite brand.
 
It's not misinformation, though, if that's how it actually performed at the time of the test.

This "may" be how all cards performed at the time when the software he was testing was suffering from a bug that randomly impacted performance on the GPUs he was testing (and the same software also apparently had another AMD related bug which was fixed in the next release "Updated version handling of AMD drivers"). So the validity of his tests are equal to 0. This is pure misinformation and his replies in the comments to people questioning the results is all the proof needed. The bug appeared in the SteamVR build released on July 27th and was fixed in the build released on August 29th so everything running through SteamVR (and which used Adaptive Quality, because not all software uses it) was potentially broken which means that all of his stuff has to be redone given that he is apparently using SteamVR's performance logger to "benchmark" everything. He is full of it and shouldn't be trusted at all.

Anyway, let's get back on track and discuss valuable subjects instead.

Let me reiterate that everything using UE4 for VR (besides Oculus's own demos using the modified renderer) is simply horrible compared to Unity and Source 2. Trials on Tatooine is simply broken on AMD GPUs. Performance is horrendous while IQ is the worst of any VR experience out there (the ILM geniuses are using UE4's TAA, which is a big no-no in VR); same for Ikea's kitchen crap: full-blown TAA, so everything is muddy, with ghosting everywhere thanks to the temporal nature of the AA. NVIDIA's Multi-Res Shading fixes the performance (good for benchmarks!) but the IQ is still shit (actually it's technically worse, given that the outer parts of the FOV are now rendered at a lower res). UE4 using the default renderer is simply not suited for VR. So in addition to benchmarks, people should also call Epic (and the devs using it) out about the horrible VR experience provided by their products. It is unacceptable compared to what Valve did with The Lab (Source 2, Unity) and Destinations (Source 2), WeVR with theBlu (Unity), etc.
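
And if a UE4 title ships without an AA toggle, the usual user-side workaround is forcing the post-process AA off through the game's Engine.ini (r.PostProcessAAQuality is a stock UE4 console variable; 0 disables post-process AA entirely, trading the ghosting for jaggies; results vary per title):

    [SystemSettings]
    r.PostProcessAAQuality=0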
 
Is TAA really that much worse than MSAA, or have we simply not seen an implementation that works well? TSSAA in Doom, for example, seems like it might be superior. I'm not sure if they have been using it for their VR demo, though. Their implementation seems very close to what ATW is doing in current engines, with minimal cost. In fact it may actually be substantially faster than MSAA. That seems to be the leading technique used for Doom, and they've mentioned cooperation within Bethesda for Fallout 4. They at least mentioned that on consoles the technique works for adaptive resolution scaling as well as having minimal cost, including eliminating the MSAA penalties. It seems like their technique might provide a more accurate "guess" at what the current frame should be. If the performance boost puts your framerate in excess of the device's refresh rate, the temporal part should still work, since you can't change direction within so short an interval. The big cards are pushing Doom at well over 100 fps, so it seems plausible it could be adapted for VR.
 
Is TAA really that much worse than MSAA, or have we simply not seen an implementation that works well?
Any temporal technique (using data from previous frames) is a no-go in VR. It's as simple as that. Temporal AA, temporal reconstruction, most screen-space effects, etc. should simply not be used in VR. To quote Oculus:

We also wanted to compare hardware accelerated multi-sample anti-aliasing (MSAA) with Unreal’s temporal antialiasing (TAA). TAA works extremely well in monitor-only rendering and is a very good match for deferred rendering, but it causes noticeable artifacts in VR. In particular, it can cause judder and geometric aliasing during head motion. To be clear, this was made worse by some of our own shader and vertex animation tricks. But it’s mostly due to the way VR headsets function.
Currently the only real solutions for good AA in VR are MSAA, which is why forward renderers like Source 2 and Unity 5.4 are a better choice (or UE4 with Oculus's experimental renderer), and SSAA (but that is simply too costly). So Valve implemented Adaptive Resolution + MSAA, which is currently the best solution.
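
For those curious how Adaptive Resolution works under the hood, here's a minimal sketch of the idea (not Valve's actual code): watch the GPU frame time against the 11.1 ms budget and step a render-target scale up or down before reprojection has to kick in. The thresholds, step sizes, and clamp range below are illustrative assumptions:

    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    // Minimal sketch of an adaptive-quality controller. Drop resolution
    // aggressively when GPU time nears the 11.1 ms (90 Hz) budget, and
    // creep it back up when there is comfortable headroom.
    struct AdaptiveQuality {
        double scale = 1.0;                 // render-target resolution multiplier
        void Update(double gpuMs) {
            const double kBudgetMs = 1000.0 / 90.0; // ~11.11 ms per frame at 90 Hz
            if (gpuMs > kBudgetMs * 0.9)            // danger zone: back off fast
                scale -= 0.1;
            else if (gpuMs < kBudgetMs * 0.7)       // lots of headroom: creep up
                scale += 0.02;
            scale = std::min(1.4, std::max(0.65, scale)); // clamp to sane range
        }
    };

    int main() {
        AdaptiveQuality aq;
        for (double gpuMs : {8.0, 9.5, 10.5, 11.0, 9.0}) { // fake GPU timings
            aq.Update(gpuMs);
            std::printf("gpu %.1f ms -> scale %.2f\n", gpuMs, aq.scale);
        }
    }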
 
SSAA (but that is simply too costly)
Point being, the method used for Doom had a negligible (~1%) impact on performance. If used primarily to handle shader aliasing while accounting for the actual ATW, it seems like it would work. My thinking is using the past frame as a sort of stereo image with which to get a better approximation of edges, not necessarily blending it into your final scene. With past kits you'd avoid it because you would be feeding one temporal effect into another for the ATW. For a more modern kit it might be plausible. They also had a hybrid forward/deferred renderer that might address the concerns. I agree with what you're saying here, but just because the techniques don't currently work doesn't mean they can't in the future, or with improvements to the kits.
 
Point being, the method used for Doom had a negligible (~1%) impact on performance.
I definitely agree with you here. "Brute-force" MSAA is simply the first step until more elegant and advanced techniques are created to cater to VR.

EDIT: Oculus apparently took down their experimental forward renderer for UE4 a few hours ago (!). It was up when I posted the link last night, but it now returns a 404 page, and their repository on GitHub is now empty: https://github.com/Oculus-VR (wtf)
 
The bug appeared in the SteamVR build released on July 27th and was fixed in the build released on August 29th, so everything running through SteamVR which used Adaptive Quality (not all software uses it) was potentially broken. That means all of his stuff has to be redone, given that he is apparently using SteamVR's performance logger to "benchmark" everything. He is full of it and shouldn't be trusted at all.
Highly unlikely. A bug of this scale would have prompted AMD to issue a statement outlining the details of the bug and how to work around it. One of the games tested had massive issues, about which AMD employees (Raja included) contacted HOCP, and AMD released a driver to fix it; that didn't happen for the other apps. So no, AMD is the one to blame here, not SteamVR. Anyhow, HOCP will continue to test other apps down the line; when we see AMD performance not changing, that's when we will know for sure that the cause is anything but bugs.
 
Highly unlikely. A bug of this scale would have prompted AMD to issue a statement outlining the details of the bug and how to work around it.

You can bet your bottom dollar that if there had been a bug affecting Nvidia, Kyle would have stayed up for hours/days/weeks/months waiting for Nvidia to send him a fix, and then rushed to post the results.

Since it's AMD, I'm sure we will get some story about how AMD begged him to come to their event and even got him a hotel room, but he wouldn't go because the new cards would be bad and he can't be bought. Honestly, just go back and look at all the BS involving his hatred of AMD.
 
Highly unlikely, a bug of this scale would have prompted AMD to issue a statement ....
What exactly is highly unlikely? As of today this is false/a lie. Anyone with a Fury X can run the Robot Repair experience with all the SteamVR performance graphs logging every single frame (even reported in real time in the headset, so you know exactly what's happening at any moment), or check the log afterwards, and tell you that there are no frame drops and frame times never go over 11.1 ms (thus never engaging reprojection). Adaptive Quality (which is used in all of The Lab's experiences, Valve's Destinations, and all Unity 5.4 games using Valve's open-sourced renderer) was broken in SteamVR in certain scenarios at least since July 27th, as has been reported by many users/developers on the Steam forums (and Unity's). I personally never experienced it on the PC running the Fury X, so I can't say whether performance was as bad when the bug was encountered. The simple fact that he clearly doesn't want to re-run the test now that the issue is fixed is enough to say that he's a scumbag.
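
For anyone who'd rather pull those numbers programmatically than eyeball the in-headset graph, here's a minimal sketch against the OpenVR SDK (the GetFrameTiming call and the Compositor_FrameTiming fields are as in openvr.h; double-check them against your SDK version):

    #include <openvr.h>   // OpenVR SDK: https://github.com/ValveSoftware/openvr
    #include <cstdio>

    int main() {
        vr::EVRInitError err = vr::VRInitError_None;
        vr::VR_Init(&err, vr::VRApplication_Background); // attach to running SteamVR
        if (err != vr::VRInitError_None) return 1;

        vr::Compositor_FrameTiming timing = {};
        timing.m_nSize = sizeof(timing);                 // API requires the size set
        if (vr::VRCompositor()->GetFrameTiming(&timing)) // stats for the latest frame
        {
            std::printf("GPU: %.2f ms, dropped frames: %u\n",
                        timing.m_flTotalRenderGpuMs, timing.m_nNumDroppedFrames);
            if (timing.m_flTotalRenderGpuMs > 11.1f)     // over the 90 Hz budget
                std::printf("-> compositor will have to reproject\n");
        }
        vr::VR_Shutdown();
        return 0;
    }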

I can also confirm that UE4 VR games/experiences run like crap on AMD GPUs and look like shit (on both NVIDIA and AMD GPUs). People should stop looking at normal screenshots or videos to base their opinion of VR on. What's projected in the headset in motion (in the case of UE4 on PC, and soon PSVR BTW) is currently vastly inferior because of the poor IQ. Even Oculus had to write their own UE4 renderer, for Christ's sake!
 
Let me reiterate that everything using UE4 for VR (besides Oculus's own demos using the modified renderer) is simply horrible compared to Unity and Source 2.

So what? UE4 is not being used here as a synthetic VR benchmark. We're talking about actual VR content that people are buying these headsets for. If AMD performs poorly on UE4 content, then that's the beginning and the end of it as far as consumers are concerned. If I were giving someone purchasing advice on a GPU for VR right now and they were interested in something like Raw Data, then the only safe advice would be to point them to a 980 Ti/1070/1080. Bemoaning how the UE4 engine is ill-suited for VR right now, or how Nvidia has an unfair advantage, doesn't change the fact that there were and will continue to be developers who opt for UE4 over Unity for a number of legitimate reasons (preference for C++, past experience with UDK, preference for Epic's licensing policies, etc.). If people are facing the prospect of dumping $800+ into VR hardware and reorganizing their living space for it, then they need to be aware of what that "VR Premium" sticker on the RX 480 box actually represents, particularly on Steam, where there's no rigid curation policy for performance like there is on Oculus Home.
 
If I were giving someone purchasing advice on a GPU for VR right now and they were interested in something like Raw Data, then the only safe advice would be to point them to a 980 Ti/1070/1080.
Is that really safe advice, though? Everyone keeps hating on async compute as an AMD marketing gimmick, but the low-level APIs are using it, and even Nvidia decided they needed to integrate it, along with preemption, into Pascal before it really hit the public consciousness. That seems odd considering they had reasonably good utilization without it.

What makes more sense is that it's extremely beneficial for ATW and VR output. If that's the case, recommending a 980 Ti with its current async capabilities might not be very justified. For a bit more, a 1070 might be a far better choice for a number of reasons. Indie developers likely aren't doing a lot of modification to the rendering code in UE or other engines, so if Epic rolls out some new code the picture could change substantially. While Maxwell 2 might be able to run the code, Pascal might be a far more attractive option with those techniques. A 480 might be an even better experience, but at this time there's no definitive way of knowing unless a dev says something, and it would have to be a dev capable of writing their own engine and working with the various IHVs under NDA. So any current recommendation should be taken with a huge grain of salt.
 
Is that really safe advice, though? Everyone keeps hating on async compute as an AMD marketing gimmick, but the low-level APIs are using it, and even Nvidia decided they needed to integrate it, along with preemption, into Pascal before it really hit the public consciousness.
AMD has already made async compute partially "available" on DX11 to developers via the LiquidVR SDK. But then again, given their current market share, not many will bother with it, and most will probably wait until the big engines finally fully move to DX12 (which is far from being the case for UE4):

https://github.com/GPUOpen-LibrariesAndSDKs/LiquidVR

LiquidVR™ provides a Direct3D 11 based interface for applications to access the following GPU features: Async Compute, Multi-GPU Affinity, Late-Latch, and GPU-to-GPU Resource Copies.

  • Async Compute: Provides in Direct3D 11 a subset of functionality similar to async-compute functionality in Direct3D 12.
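
To make the comparison concrete, this is the native D3D12 shape of what that bullet describes: a dedicated compute queue whose work the GPU can service alongside the graphics queue, which is exactly the property async timewarp wants. A minimal sketch assuming you already have an ID3D12Device; error handling elided:

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Create a compute-only queue next to the usual direct (graphics) queue.
    // Work submitted here can overlap graphics work on hardware that supports
    // concurrent execution.
    ComPtr<ID3D12CommandQueue> MakeComputeQueue(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        return queue;
    }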
 
For a bit more, a 1070 might be a far better choice for a number of reasons.

Yep, I would also choose a 1070 over a 980 Ti, but not because of anything relating to ATW, better preemption granularity, etc., but rather simply because it's newer and Nvidia will likely carry it with general performance updates for a longer period than the aging 980 Ti.

All of the other 'if's and 'might's regarding the future impact of async shading and preemption to better accommodate the compositor are nice fodder for speculative internet discussion (and also a source of comfort for folks with poorer-performing hardware), but it should be made clear that it's little more than that when it comes time to provide purchase advice to other people, particularly when that speculation seems to run counter to what past and present evidence shows.

Conventionally faster GPUs provide faster frame times in VR. Faster frame times give you more breathing room to avoid dipping into Valve's ugly reprojection scheme (which is something I would consider unusable for regular gameplay), as well as larger render-target sizes (which is also something I would consider crucial for certain content). The optimizations that have significant impacts on frame times seem to be the ones that are actually giving real-world payoffs (MRS, multi-GPU, etc.).
 
All of the other 'if's and 'might's regarding the future impact of async shading and preemption to better accommodate the compositor are nice fodder for speculative internet discussion (and also a source of comfort for folks with poorer-performing hardware), but it should be made clear that it's little more than that when it comes time to provide purchase advice to other people, particularly when that speculation seems to run counter to what past and present evidence shows.
Do you have any evidence to support this? Because it seems both AMD and Nvidia think async is necessary, along with all the major players in VR, who are spending a lot of money to make it happen. All evidence seems to suggest quite the opposite of what you're saying, so your opinion is rather curious.
 
Perhaps we should spell things out a little more clearly to make sure we're on the same page. Which "async" are you specifically talking about here, and in what capacity are you expecting it to impact either general gaming performance or VR specifically?
 
Timewarp/reprojection is only relevant if the GPU (and its driver!) is failing to keep up with the required minimum frame rate, and you always want to avoid it for the best VR experience. When it does happen, though, producing a timewarp/reprojection frame should not be a costly operation; I'll be surprised if it takes more than 1 ms. Even if AMD cards could do timewarp for free, if they cannot keep up with the minimum (real-frame) rendering time most of the time like a 980 Ti can, then they are not the better choice in any way.

To illustrate:

980 Ti
Average frame rendering time: 8 ms (> 90 FPS)
Timewarp rendering time: 1 ms <- higher cost, but never needed.

Fury X
Average frame rendering time: 12 ms (< 90 FPS)
Timewarp rendering time: 0 ms <- infinitely faster (generously speaking), but needed all the time because the GPU/driver can't keep up.

The 980 Ti will give you better IQ and a better experience, and is thus the better choice.
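
To put numbers on why the 12 ms case is so much worse than it looks, here's a tiny sketch assuming SteamVR-style interleaved reprojection, where missing the 11.1 ms budget locks the app to every other vsync (45 real frames per second, with the compositor synthesizing the rest):

    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double budgetMs = 1000.0 / 90.0;   // ~11.11 ms per frame at 90 Hz
        for (double frameMs : {8.0, 12.0}) {     // the 980 Ti and Fury X cases above
            if (frameMs <= budgetMs)
                std::printf("%.0f ms: native 90 FPS, reprojection never engages\n", frameMs);
            else
                std::printf("%.0f ms: locked to 45 real FPS + 45 reprojected\n", frameMs);
        }
    }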
 