Nvidia Pascal Announcement

Look at the date...
This would mean it is an engineering sample belonging to one of the AIB manufacturers or a close NVIDIA partner, so it is very unlikely to be a true result.
Case in point: look how many 3DMark-related benchmarks are out there for the 1080 that actually look plausible; those lab/partner employees are more likely to be using that than Ashes.
Personally, I feel only one of those 3DMark results was possibly real (not the Fire Strike one posted), for reasons I gave in the past.

Compounding this, didn't they do something to the DX12 code path for NVIDIA cards in Ashes of the Singularity, meaning changes would need to be made for it to work well with Pascal?

Cheers
 
The Four New GeForce GTX 1070 and 1080 Related Technologies:
SMP - Simultaneous Multi-Projection
Nvidia Ansel
VR Audio
New SLI bridge
SMP - Simultaneous Multi-Projection
New with Pascal is Simultaneous Multi-Projection, a technology that allows the GPU to calculate a maximum of sixteen camera viewpoints at nearly no performance loss. Previous-gen GPU architectures can only do the math for one camera viewing angle / point of view. This feature is not software based; it is implemented in hardware in the GPU pipeline.

So why would you want to be able to do this, you might wonder? Well, I spilled the beans a bit already in the opening paragraph. Currently, when you game on multiple screens you are looking at a 'projection' of one 2D image. Now, if you have one 2D image spread over three monitors, it is only going to look good if the monitors are standing in a straight line next to each other. When you angle or 'curve' the monitors around you, the angles will distort the image, e.g. a straight line would have a bend in it. Some games have fixes for this, but nothing solid. Well, with Pascal this is a thing of the past, as this is exactly where Simultaneous Multi-Projection comes in. With the help of your driver control panel you can enter the angle of your monitors to match how you have set them up. The 3D imagery will then be calculated on the GPU for each screen based on the angle of your monitors. So if you surround yourself with three monitors, the rendered images will not be warped but will be displayed correctly.
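
To make the idea concrete, here is a minimal sketch of the geometry involved (Python, with made-up monitor angles; this is not Nvidia's actual API, just an illustration of one view direction per screen instead of one flat projection for all three):

```python
import numpy as np

def yaw_matrix(deg):
    """Rotation about the vertical (Y) axis by `deg` degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Hypothetical surround setup: side monitors angled 30 degrees
# towards the viewer (the angle you would enter in the control panel).
monitor_angles = [-30.0, 0.0, +30.0]

forward = np.array([0.0, 0.0, -1.0])  # the camera looks down -Z
for angle in monitor_angles:
    # One projection per monitor: rotate the view to match the panel.
    view_dir = yaw_matrix(angle) @ forward
    print(f"monitor at {angle:+5.1f} deg -> view direction {np.round(view_dir, 3)}")
```

The point of SMP is that the GPU evaluates all of these viewpoints in a single geometry pass; before Pascal you would effectively have to render the scene once per angled monitor.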

The beauty here is that, thanks to the added core logic on the GPU, this angle correction does not come with a performance loss, or at most a very small one. SMP obviously also helps in VR environments, where you typically need all kinds of tricks to produce the two rendered images and warp them for the lenses. Being able to do this in one pass, in hardware on the GPU, will create huge performance increases for the upcoming GeForce GTX 1070 and 1080 in VR. Again, this is hardware based and thus cannot be added to Fermi and/or Maxwell models with a driver update.
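
For the VR case, the 'tricks' mentioned above usually mean a post-render warp that counteracts the headset's lens distortion. A toy version of such a warp (the k1/k2 coefficients are invented; real HMD SDKs use calibrated, often mesh-based warps):

```python
def barrel_warp(x, y, k1=0.22, k2=0.24):
    """Push a normalised image coordinate outward so the HMD lens
    bends it back to look straight; k1/k2 are made-up values."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Warp a few sample coordinates of one eye's image.
for x, y in [(0.0, 0.0), (0.5, 0.0), (0.9, 0.0)]:
    wx, wy = barrel_warp(x, y)
    print(f"({x:.1f}, {y:.1f}) -> ({wx:.3f}, {wy:.3f})")
```

Doing this as a separate full-screen pass per eye costs performance; SMP's multiple hardware projections are what let Pascal handle the per-eye views in a single pass.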

Nvidia Ansel
Named after the famous photographer Ansel Adams, Nvidia intros Ansel, a new way of making in-game screenshots. Capturing stills from games is something we pretty much do on a daily basis here at Guru3D. With the Ansel announcement Nvidia steps it up a notch; Nvidia even called it an art form (I wouldn't go that far, though). Screenshots are typically a 2D image taken from a 3D rendered scene. Nvidia figures (with VR in mind): why not grab a 360-degree screenshot in-game (if the game supports Ansel technology), so that you can take a still, save it and later use your VR headset to look at the screenshot in 3D? It can also be used to create super-resolution screenshots or just 'regular' screenshots, and then apply EXR effects and filters.

Ansel offers the ability to grab screenshots in 3D at incredible resolutions, up to 61,440 x 34,560 pixels, with silly-sized screen grabs that can reach 1.5 GB for one capture. Funny, however, is that Nvidia 'borrowed' some SweetFX ideas: after you have captured the screenshot you can alter the RAW data and thus make the image darker/lighter, set the color tone and apply filters to the screenshot (think Instagram effects). While not 'necessary', Ansel was designed with VR in mind, so that you can grab a still, alter it and then view it in 3D with your Oculus Rift or HTC Vive. Ansel will also become available for previous-generation products and is not a Pascal-specific thing.
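
As a rough illustration of that post-capture filtering (this is not Ansel's actual interface, just the kind of operation it performs; the exposure and tint values are invented):

```python
import numpy as np

def apply_tone(pixels, exposure=1.2, tint=(1.0, 0.95, 0.85)):
    """Brighten/darken and tint a float RGB image in [0, 1]:
    an Instagram-style 'warm' filter with made-up values."""
    out = pixels * exposure        # overall brightness
    out = out * np.asarray(tint)   # per-channel colour tone
    return np.clip(out, 0.0, 1.0)

# Stand-in for a captured grab: a tiny 4x4 mid-grey RGB image.
grab = np.full((4, 4, 3), 0.5)
filtered = apply_tone(grab)
print(filtered[0, 0])  # -> [0.6  0.57 0.51]
```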

VR Audio
Another technology that was introduced is, again, VR related. Nvidia offers VRWorks, a set of developer tools that allows devs to improve their games for VR. One of the new additions is VRWorks Audio; this technology makes it possible to simulate, on the GPU, the reflection and absorption of audio waves within a virtual 3D space. Basically, the GPU will calculate and predict how certain audio would sound as it bounces off a hard floor or a soft one, combined with the other objects it bounces off. For example, if you talk in a room with concrete walls, it will sound different from the same room with carpets hanging on the walls. The last time a GPU manufacturer added 3D audio on the GPU it failed to become a success; that would be AMD TrueAudio.
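
A crude way to picture the concrete-versus-carpet example (purely a toy model with invented absorption coefficients, not VRWorks Audio's actual GPU simulation, which traces audio paths through the real scene geometry):

```python
def reflected_energy(energy, absorption, bounces):
    """Each bounce off a surface keeps (1 - absorption) of the sound
    energy; hard surfaces absorb little, soft ones absorb a lot."""
    for _ in range(bounces):
        energy *= (1.0 - absorption)
    return energy

concrete = reflected_energy(1.0, absorption=0.02, bounces=3)  # bare walls
carpeted = reflected_energy(1.0, absorption=0.60, bounces=3)  # carpeted walls
print(f"echo energy, concrete room: {concrete:.3f}")  # ~0.941 -> loud echo
print(f"echo energy, carpeted room: {carpeted:.3f}")  # ~0.064 -> muffled
```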

The question arises: will VRWorks Audio create enough interest and momentum that developers will actually implement it? To demo all the possibilities of VRWorks, Nvidia will soon release a new demo application called Nvidia VR Funhouse. The application will demo not just VRWorks Audio but also physics simulations in a VR environment.

New SLI bridge
Starting with Pascal, the SLI connectors will change: you cannot use your older SLI bridges with Pascal and vice versa. NVIDIA calls this SLI HB (with HB standing for High Bandwidth). Within these bridges, two cards will now be connected through two SLI connections, which kind of makes you wonder what will happen with 4-way SLI. So for the new Pascal-based graphics cards, new SLI bridges will be needed. Nvidia is going to sell them in three models so you can adapt to your motherboard setup and PCIe slot spacing.
http://www.guru3d.com/news-story/the-four-new-geforce-gtx-1070-and-1080-related-technologies.html
 
OK, no idea if this has changed since the beginning of September 2015.
Kollock at Oxide went on record to say:
Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports.

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is still far far better on Nvidia, and we haven't even tuned it as much as DX11.

So why would one expect a developer at a close NVIDIA partner's lab, with an engineering sample, to test with AoTS and DX12?
Where are the performance result leaks for VR or another game with benchmarks that utilise NVIDIA technology?
The last two are more likely to be seen by such an employee with an engineering sample IMO.

Anyway, if the above quote has not changed, then testing in AoTS is currently pointless, and it answers my earlier point about requiring multiple DX12 paths for Pascal and then Maxwell/Kepler.
Cheers
 
AoTS 1080 supposed benchmarks (based on the benches published on the AoTS website http://www.ashesofthesingularity.com/metaverse#/ladders/benchmark/overall/Crazy_5K?viewType=myself&filters={"gpu":"67DF:C4"}) are all over the place:

http://videocardz.com/59725/nvidia-gtx-1080-polaris-10-11-directx12-benchmarks

PS: Even here they claim to have a Polaris bench (quite poor, by the way).
 
AoTS 1080 supposed benchmarks (based on the benches published on the AoTS website http://www.ashesofthesingularity.com/metaverse#/ladders/benchmark/overall/Crazy_5K?viewType=myself&filters={"gpu":"67DF:C4"}) are all over the place:

http://videocardz.com/59725/nvidia-gtx-1080-polaris-10-11-directx12-benchmarks

PS: Even here they claim to have a Polaris bench (quite poor, by the way).
Yeah.
Just to clarify, my post expands upon my earlier one (703): these are more than likely fake results, and it is also meaningless to test AoTS under DX12 with Pascal if the situation Kollock explains is still true for their DX12 path, along with DX12 games now needing multiple paths for NVIDIA cards.
Cheers
 
Yeah, and using different benching systems adds too many variables too. Even if they were real, I would still just take a look at it and wait for something more credible.
 
AoTS 1080 supposed benchmarks (based on the benches published on the AoTS website http://www.ashesofthesingularity.com/metaverse#/ladders/benchmark/overall/Crazy_5K?viewType=myself&filters={"gpu":"67DF:C4"}) are all over the place:

http://videocardz.com/59725/nvidia-gtx-1080-polaris-10-11-directx12-benchmarks

PS: Even here they claim to have a Polaris bench (quite poor, by the way).
As for the Polaris benches, I wouldn't put too much weight on them yet:
67DF:C4 (POLARIS 10) 50.60 58.00 54.00 42.50
67FF:C8 (POLARIS 11) 40.00 51.10 41.30 32.10
67DF:C4 (POLARIS 10) 22.30 24.20 21.80 21.30

At 1080p low, two entries with the same "Polaris 10" codename show over 100% difference in FPS, and one is barely half the speed of Polaris 11?
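
Rough arithmetic, assuming the first column is the 1080p-low figure: 50.60 / 22.30 ≈ 2.27, so the two "Polaris 10" rows are about 127% apart, and the slower run's 22.30 FPS is only about 56% of Polaris 11's 40.00 FPS.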
 
Anyway, if the above quote has not changed, then testing in AoTS is currently pointless, and answers my point earlier of requiring multiple paths in DX12 for Pascal and then Maxwell-Kepler.
Don't bother with that quote; it has changed. Oxide has re-enabled the dedicated compute queue for all architectures by default, and they did so not because they've changed anything major in the code itself, but because the driver got a new profile/heuristic to eliminate the performance penalty. This only fixes it for this specific game, though. You can verify this by cross-testing older builds of the game and the driver.
 
Why is no one talking about the lack of support for HDR? This is for me the biggest deal breaker in this series and the reason why I wouldn't consider buying one. AMD already said they will fully support it, and for me HDR is the best improvement in video technology since Full HD.

Does anyone have an opinion about this? I would really like to hear your thoughts on this topic.
 
Why is no one talking about the lack of support for HDR? This is for me the biggest deal breaker in this series and the reason why I wouldn't consider buying one. AMD already said they will fully support it, and for me HDR is the best improvement in video technology since Full HD.

Does anyone have an opinion about this? I would really like to hear your thoughts on this topic.
I'm not as strong as others on the GPU topic, but I feel as though Nvidia is still working on G-Sync right now, and I'm not sure if there are things that need to be worked out for G-Sync with HDR monitors. I assume it would cost substantially more to have both features in a monitor, and/or perhaps from a business perspective it is just not there yet.
 
Why is no one talking about the lack of support for HDR? This is for me the biggest deal breaker in this series and the reason why I wouldn't consider buying one. AMD already said they will fully support it, and for me HDR is the best improvement in video technology since Full HD.

Does anyone have an opinion about this? I would really like to hear your thoughts on this topic.
https://developer.nvidia.com/introducing-nvidia-geforce-gtx-1080
With support for DirectX 12 and Vulkan graphics APIs, these cards are designed to drive display devices beyond 5K including HDR as well as delivering the performance required for truly immersive VR.
 
I'm not as strong as others on the GPU topic, but I feel as though Nvidia is still working on G-Sync right now, and I'm not sure if there are things that need to be worked out for G-Sync with HDR monitors. I assume it would cost substantially more to have both features in a monitor, and/or perhaps from a business perspective it is just not there yet.
Yes it would, but there are already TVs that are HDR capable, and HDR monitors will hit the market this year. Also, AMD said that Polaris will have full support for it... And with the importance of HDR, I was really disappointed that Nvidia didn't even try partial support for it. Again, this may be a result of Pascal just being Maxwell with a VR focus and not much more... But still.
 
Why is no one talking about the lack of support for HDR?

They did claim the new cards are DisplayPort 1.3/1.4 ready, meaning they should be able to drive HDR displays, at least as far as the physical connection is concerned.
As for HDR being supported in games, AFAIK it's mostly a matter of software support.

Also, AMD said that Polaris will have full support for it
And they also said that current GCN 2/3 cards would support it through a driver update, so it's possible even on DisplayPort 1.2.

Regardless, I think for the moment only OLED screens have HDR support (which makes sense because of their "infinite" contrast), and those aren't getting cheap anytime soon.
 
They did claim the new cards are DisplayPort 1.3/1.4 ready, meaning they should be able to drive HDR displays, at least as far as the physical connection is concerned.
As for HDR being supported in games, AFAIK it's mostly a matter of software support.
Actually, not just software. You need a hardware implementation (the way you encode the colors, the brightness, etc.). AMD has practical support for HDR in Fury and, I think (not sure), in the 380 (games and pictures, but not videos, since the standards for HDR video weren't complete at the time).
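
To give a sense of what "the way you encode the colors, the brightness" means in practice, here is the SMPTE ST 2084 (PQ) transfer curve that HDR10 uses, sketched in Python. The display pipeline has to implement a curve like this in hardware; the sketch itself is only an illustration, not any vendor's actual code:

```python
def pq_encode(nits):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m^2
    (up to 10,000) mapped to a 0..1 signal value, as used by HDR10."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

for nits in (0.1, 100, 1000, 10000):
    print(f"{nits:8.1f} nits -> PQ signal {pq_encode(nits):.3f}")
```

Note how 100 nits (roughly the old SDR white level) already lands around half the signal range; that redistribution of precision is what the GPU output, the cable standard and the display all have to agree on.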

But what makes me doubt these cards being capable of HDR is the lack of marketing about it... Again, HDR is the best improvement in image quality since Full HD. AMD released plenty of marketing material about it, but Nvidia did not.
 
Don't bother with that quote; it has changed. Oxide has re-enabled the dedicated compute queue for all architectures by default, and they did so not because they've changed anything major in the code itself, but because the driver got a new profile/heuristic to eliminate the performance penalty. This only fixes it for this specific game, though. You can verify this by cross-testing older builds of the game and the driver.
Thanks Ext3h,
sorry to be a pain, but can you link to where this was clarified?
The headache with that is knowing which NVIDIA driver version it is in, and how it changes from that initial one...
So this means Oxide removed that rendering path at the time NVIDIA released a new driver?
And, complicating things in a different way if true, there is the question of whether it is also in the current beta driver developed for Pascal.
Given my edit below, are you sure it definitely changed?
Thanks again.

Edit:

BTW, this must be very recent, because even in February 2016 Kollock was still saying it was disabled in the rendering path for the public build of AoTS.
Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.
That was on the 16th of February 2016.
He went on to say in that same post:
I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.
 