Nvidia Pascal Announcement

Simultaneous multi-projection allows Pascal to “warp” an image to align it properly rather than attempting to stretch one frustum across three displays. It corrects for the type of distortion you see above. While we haven’t seen the impact of SMP on VR yet, the improvement to multi-monitor gaming is substantial. Nvidia is projecting that its total VR performance will be up to 2-3x higher than current cards, which would make a single GTX 1080 faster than two Titan Xs in SLI.
....
The big-picture takeaway is this: If you bought a GTX 980 or 970 back in 2014, Pascal is going to offer some enormous overclocking range, vastly improved VR performance, and all the benefits enthusiasts have been hoping 16nm FinFET would deliver. The GTX 1070 in particular should hit the sweet spot of the upper-midrange / lower-high-end, just as the GTX 970 did in 2014.

http://www.extremetech.com/gaming/2...070-faster-than-titan-x-at-a-much-lower-price
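For anyone trying to picture the "warp": a minimal sketch of the corrected three-monitor setup, assuming GLM. The 45-degree side-monitor angle and the helper name are my own illustration, not Nvidia's API; the point is that each display gets its own normal frustum with a rotated view instead of one wide frustum stretched over all three.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <array>

// One view/projection pair per monitor instead of a single wide frustum.
std::array<glm::mat4, 3> surroundViewProj(const glm::mat4& centerView) {
    // An ordinary 60-degree frustum for each display...
    const glm::mat4 proj =
        glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 1000.0f);
    // ...combined with a view rotated to face that display, so geometry on
    // the angled side screens is projected correctly instead of stretched.
    const float angles[3] = { -45.0f, 0.0f, 45.0f }; // left, center, right
    std::array<glm::mat4, 3> out;
    for (int i = 0; i < 3; ++i) {
        const glm::mat4 rot = glm::rotate(glm::mat4(1.0f),
                                          glm::radians(angles[i]),
                                          glm::vec3(0.0f, 1.0f, 0.0f));
        out[i] = proj * rot * centerView;
    }
    return out;
}
```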
 
Yeah, this ties into my points from quite a while ago about native single-pass rendering of multiple windows or points of view; they are aligned technologies, as seen in the presentation at around 55 min and 1 hr 4 min.
It would still be great to know how it is implemented in Pascal, though, as it looks to be very interesting not just for VR but also, as the article mentions, for correcting 3D environments across 2-3 screens.
Cheers
 
Looks like the viewport is allowed to be a non-linear transformation now, and you may choose a different projection matrix per viewport?

The latter should have been possible before, if you amplified via the geometry shader and then used the viewport-array extension to route accordingly. The first one wasn't, and wasn't accessible before at all, since the viewport transform happened only after scissoring and culling of anything outside the viewport.

The change in hardware might be that the viewport transformation is now actually represented by a regular 3x3/4x4 matrix, and not just a packed float array consisting of an offset + prescaler for both axes. That sounds like the simplest explanation, at least.
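To make the two representations concrete, a minimal sketch in plain C++ (no graphics API; the struct and field names are made up for illustration):

```cpp
#include <cstdio>

// Packed form: the classic GL viewport state, an offset plus a prescaler
// per axis, i.e. x_win = x0 + halfW * (x_ndc + 1).
struct ViewportPacked { float x0, y0, halfW, halfH; };

void applyPacked(const ViewportPacked& v, float nx, float ny,
                 float& wx, float& wy) {
    wx = v.x0 + v.halfW * (nx + 1.0f);
    wy = v.y0 + v.halfH * (ny + 1.0f);
}

// Full-matrix form: the same transform embedded in a 3x3 homogeneous matrix.
// Scale + offset only ever populate the diagonal and last column; a general
// matrix can additionally carry rotation/shear, which is what a per-viewport
// reprojection would need the hardware to accept.
void applyMatrix(const float m[3][3], float nx, float ny,
                 float& wx, float& wy) {
    wx = m[0][0] * nx + m[0][1] * ny + m[0][2];
    wy = m[1][0] * nx + m[1][1] * ny + m[1][2];
}

int main() {
    ViewportPacked v{0.0f, 0.0f, 960.0f, 540.0f};              // 1920x1080
    float m[3][3] = {{960, 0, 960}, {0, 540, 540}, {0, 0, 1}}; // same transform
    float ax, ay, bx, by;
    applyPacked(v, 0.5f, -0.25f, ax, ay);
    applyMatrix(m, 0.5f, -0.25f, bx, by);
    std::printf("packed: %.1f %.1f  matrix: %.1f %.1f\n", ax, ay, bx, by);
}
```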

Is scissoring and clipping to the viewport performed by a shader program chained to the last vertex stage, or "in hardware" inside the raster engine? That would make this either a software or an actual hardware modification.

(I was actually always under the impression that the viewport was expressed by a full matrix, not just a compacted form, and that the interfaces that only let you specify scale and offset were just oversimplifying things. That's why I first assumed it had been possible before.)
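For reference, a rough sketch of the geometry-shader path I mean, as GLSL embedded in C++ (the uniform name is made up; host-side per-viewport setup via glViewportIndexedf is assumed and omitted):

```cpp
// Pre-Pascal route: amplify each triangle in the geometry shader and route
// the copies with the viewport-array feature (core since GL 4.1,
// gl_ViewportIndex). Assumes the vertex shader left positions in view space.
const char* kAmplifyGS = R"(
#version 410 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 9) out;

uniform mat4 uProjPerViewport[3]; // one projection matrix per viewport

void main() {
    for (int vp = 0; vp < 3; ++vp) {     // emit the triangle once per viewport
        for (int i = 0; i < 3; ++i) {
            gl_ViewportIndex = vp;       // route this copy to viewport vp
            gl_Position = uProjPerViewport[vp] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
)";
```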
 
I am very interested in EU pricing for the 1070. In my country it's going to be astronomical, but maybe I can get my hands on an imported card [or, more likely, a cheaper Polaris 10].
 
Scissoring and clipping is typically done in HW. Also, I'd be surprised if they changed the viewport transform, since that would affect such a major API feature and would require an API update. It's more likely they simply allow a vertex to be projected twice in the vertex shader.
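If that's how it works, the shader-facing side might look roughly like the following; this is a hypothetical sketch against Nvidia's GL_NV_stereo_view_rendering GLSL extension, with invented uniform names, and it's a guess on my part that Pascal exposes SMP like this. Routing each position to its per-eye viewport (via the companion viewport-mask built-ins) is omitted.

```cpp
const char* kStereoVS = R"(
#version 450 core
#extension GL_NV_stereo_view_rendering : require

layout(location = 0) in vec3 aPosition;
uniform mat4 uModelView;            // shared, eye-independent
uniform mat4 uProjLeft, uProjRight; // one projection per eye

void main() {
    // One vertex shader invocation, two projections of the same vertex.
    vec4 viewPos = uModelView * vec4(aPosition, 1.0);
    gl_Position            = uProjLeft  * viewPos; // left-eye clip position
    gl_SecondaryPositionNV = uProjRight * viewPos; // right-eye clip position
}
)";
```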
 
Are they doing it purely via hardware, or is it a combination of hardware and software, i.e. an SDK that developers have to integrate into their engines/games so the viewports are merged and transformed properly?
 
Has Nvidia said anything about async compute? Does it have any chance of competing with AMD on async? For the sake of future-proofing, this might be what decides my next GPU.
 
You will get more info on this in the reviews, but yes, async compute is in there.
 
http://videocardz.com/59718/nvidia-gtx-1080-gtx-1070-founders-edition-explained

Finally, a better understanding of what the Founder's Editions are

Only for people who want that shroud...

Seems kinda vain, lol. A hundred bucks for a hunk of aluminum? No thank you.
The reason this exists is that there is only the reference design at launch, and there will likely only be reference designs for months. So they can sell the card for $100 more and market it to people with a name like Founder's Edition.
 
Does anybody know why simultaneous multi-projection doubles speed in VR? I understand that vertex processing overhead is cut in half.

But I would think you would also need to double triangle rasterization speed, as well as shading and ROP speed.
 
The demonstration deliberately used a high-res model, so vertex overhead was the limiting factor.

For VR headsets, it's not so much the simultaneous projection of both eyes, but the slicing of the original viewport to better match the lens distortion, so you don't waste rasterization speed on triangles that would otherwise only get downscaled in the final lens-correction post-processing step.
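A back-of-the-envelope sketch of why the slicing saves work; all the numbers here are made up for illustration (the render-target size, the 60% center cell, the halved outer-ring density), not Nvidia's figures:

```cpp
#include <cstdio>

// Toy model: split the per-eye viewport into a 3x3 grid, keep the center
// cell at full resolution, and shade the outer ring at reduced density,
// since lens correction would shrink those pixels anyway.
int main() {
    const float W = 1512.0f, H = 1680.0f; // per-eye render target (illustrative)
    const float centerFrac = 0.6f;        // center cell spans 60% of each axis
    const float outerKeep  = 0.5f;        // outer ring keeps half its pixels

    const float full   = W * H;
    const float center = (W * centerFrac) * (H * centerFrac);
    const float sliced = center + (full - center) * outerKeep;

    std::printf("naive: %.0f px  sliced: %.0f px  saved: %.0f%%\n",
                full, sliced, 100.0f * (1.0f - sliced / full));
    // -> roughly a third of the rasterization/shading work saved, before
    //    even counting the second eye sharing one geometry pass.
    return 0;
}
```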
 
Only for people who want that shroud...
Thank you for finding and posting this. Means you saved me the equivalent of $100. ;)

Linked article text said:
Rather than sell the new polygonal reference design at a lower price, NVIDIA decided to treat it as a special edition and sell it for more money.
Oh, Nvidia... Never stop being you.
 
Edited as the linked info changed. For reference:

[attached image]
 
Compared to a Fury X it still falls short, and the improvement over a stock 980 Ti fits the 20% improvement announced by Nvidia. However, the pure brute-force increase in flops should make the gap bigger, as should a real improvement in async compute. For now (drivers?), Pascal seems much less efficient per flop than Maxwell.
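For what it's worth, the flop math behind that claim, from the public core counts and boost/stock clocks (ballpark figures; real boost clocks vary):

```cpp
#include <cstdio>

// Peak single-precision throughput = shader cores x 2 FMA ops x clock.
// Core counts and clocks are the public spec-sheet numbers.
int main() {
    struct Gpu { const char* name; int cores; float ghz; };
    const Gpu gpus[] = {
        { "GTX 1080 (boost)",   2560, 1.733f },
        { "GTX 980 Ti (boost)", 2816, 1.075f },
        { "Fury X",             4096, 1.050f },
    };
    for (const Gpu& g : gpus)
        std::printf("%-18s %.2f TFLOPS\n", g.name,
                    g.cores * 2.0f * g.ghz / 1000.0f);
    // ~8.9 vs ~6.1 TFLOPS is a ~47% raw-flop advantage over the 980 Ti, so a
    // ~20% real-world gain is what "less efficient per flop" points at.
    return 0;
}
```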
Lol... you really need to wait until the reviews come out.
 