Nvidia Ampere Discussion [2020-05-14]

This guy speaks the truth, and unlike the "tech youtuber I am not going to mention by name" (his words), he never jumped on the back of that theory. Igor, the guy who came up with that theory, also said it was only a possible cause. He could be wrong, but at least he offered a possible explanation, because he knows his stuff, and it was interesting. But when a "cool know-it-all" like JayzTwoCents showed up to give his unwanted, uninformed opinion, the issue got more complicated and drama ensued.

I blocked JayzTwoCents' videos on YouTube; took the bait once, not anymore. He believes he is something he is not. I get this feeling with some YouTubers, even the guy with the long hair: they start believing that their word carries more weight than that of people who are smarter than they are and work for a company like Nvidia.

You've won him a subscription from me; at least he is more humble, and smarter.

Yeah, "Hardware Unboxed" is really great. One of the few channels I enjoy.
 
I blocked JayzTwoCents' videos on YouTube; took the bait once, not anymore. He believes he is something he is not. I get this feeling with some YouTubers, even the guy with the long hair: they start believing that their word carries more weight than that of people who are smarter than they are and work for a company like Nvidia.
Gamer's Nexus said some very stupid things about the path tracing performance of Ampere over PCI Express 4.0. He completely ignored the minimum frame times shown on the graph as he spoke (far better than on PCI Express 3.0). He went down several notches in my estimation right there.
 
Gamer's Nexus said some very stupid things about the path tracing performance of Ampere over PCI Express 4.0. He completely ignored the minimum frame times shown on the graph as he spoke (far better than on PCI Express 3.0). He went down several notches in my estimation right there.
What path tracing application was he testing? Sometimes, depending on the application (how the benchmark works), minimum frametimes can be meaningless due to too much variance in the results from test to test.
 
Gamer's Nexus said some very stupid things about the path tracing performance of Ampere over PCI Express 4.0. He completely ignored the minimum frame times shown on the graph as he spoke (far better than on PCI Express 3.0). He went down several notches in my estimation right there.


Are you talking about [embedded video]? Because I don't see what you're talking about.
 
To me, this point was covered by his statement a little later, saying that yes, there are some improvements, but you won't notice them.
 
To me, this point was covered by his statement a little later, saying that yes, there are some improvements, but you won't notice them.

But wasn't his whole video created to find differences between PCIe 3.0 and 4.0?
If the video was made to tell users "no, you mostly, subjectively, will not notice a difference with current games and applications while using them", then his omission would be warranted.

I like most of Steve's videos, but sometimes he accidentally or purposely omits certain data and even draws wrong conclusions.

Sadly, there is not a single channel or publication that covers everything I'm interested in, in sufficient detail, so I try to watch/read more than a few and then come to my own conclusions based on the data they produce.
 
But wasn't his whole video created to find differences between PCIe 3.0 and 4.0?

If the video was made to tell users "no, you mostly, subjectively, will not notice a difference with current games and applications while using them", then his omission would be warranted.

I like most of Steve's videos, but sometimes he accidentally or purposely omits certain data and even draws wrong conclusions.

Sadly, there is not a single channel or publication that covers everything I'm interested in, in sufficient detail, so I try to watch/read more than a few and then come to my own conclusions based on the data they produce.
It's not clear to me that he's omitting anything. Speaking about the Quake 2 RTX benchmarks, he says there is a "real uplift" but one that is "probably not noticeable".

So he seems to be claiming that a user will (probably) not notice an increase from 95 FPS to 99 FPS that happens 1% of the time, or an increase from 42 FPS to 47 FPS that happens 0.1% of the time. Is that a completely unreasonable position?
 
Isn't PCIe 4.0 more aimed at NVMe drives than graphics cards?

Right now, yes. But even then, NVMe drives on PCIe 3.0 would be fine for most uses, IMO. Yeah, you will feel it in some sequential read/write benchmarks, but for loading times in games, for example, I doubt you can see a big difference. Hell, even SATA SSDs are fine for most uses.
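For scale, a minimal back-of-the-envelope sketch of the raw link rates involved (per-lane transfer rates and 128b/130b encoding are from the PCIe 3.0/4.0 specs; real-world throughput is lower once protocol overhead is counted):

// Raw PCIe link bandwidth: per-lane rate in GT/s times 128b/130b
// encoding efficiency, divided by 8 bits per byte.
#include <cstdio>

double link_gbs(double gtps, int lanes) {
    return gtps * (128.0 / 130.0) / 8.0 * lanes;  // GB/s
}

int main() {
    std::printf("PCIe 3.0 x4  (NVMe): %5.2f GB/s\n", link_gbs(8.0, 4));   // ~3.94
    std::printf("PCIe 4.0 x4  (NVMe): %5.2f GB/s\n", link_gbs(16.0, 4));  // ~7.88
    std::printf("PCIe 3.0 x16 (GPU):  %5.2f GB/s\n", link_gbs(8.0, 16));  // ~15.75
    std::printf("PCIe 4.0 x16 (GPU):  %5.2f GB/s\n", link_gbs(16.0, 16)); // ~31.51
}

The doubling is real, but as noted above, whether software actually saturates a 3.0 x4 link outside of sequential benchmarks is another question.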
 
It's not clear to me that he's omitting anything. Speaking about the Quake 2 RTX benchmarks, he says there is a "real uplift" but one that is "probably not noticeable".

So he seems to be claiming that a user will (probably) not notice an increase from 95 FPS to 99 FPS that happens 1% of the time, or an increase from 42 FPS to 47 FPS that happens 0.1% of the time. Is that a completely unreasonable position?

42 to 49. Whether it is noticeable might be subjective and is therefore an unproductive question.
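For concreteness, converting those figures to frame times shows the absolute deltas in question (numbers taken from the posts above, with the corrected 49 fps):

// Convert the quoted FPS lows to frame times (ms) and their deltas.
#include <cstdio>

double ms(double fps) { return 1000.0 / fps; }

int main() {
    std::printf("1%%   lows: 95 -> 99 fps = %.2f -> %.2f ms (delta %.2f ms)\n",
                ms(95), ms(99), ms(95) - ms(99));
    std::printf("0.1%% lows: 42 -> 49 fps = %.2f -> %.2f ms (delta %.2f ms)\n",
                ms(42), ms(49), ms(42) - ms(49));
}

That is roughly a 0.4 ms improvement in the 1% lows and 3.4 ms in the 0.1% lows, which is where the "subjective" part comes in.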
 
The new Triangle Position Interpolation unit for motion blur effects: does it require additional input from developers? Supposedly there is some performance benefit, though I'm not sure how much.
 
The new Triangle Position Interpolation unit for motion blur effects: does it require additional input from developers? Supposedly there is some performance benefit, though I'm not sure how much.

Yeah, it needs app support and clearly isn't exposed by DirectX. I think it's primarily targeted at offline renderers, though.
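For what that app support looks like, here is a minimal sketch using OptiX 7's motion acceleration-structure build, which is the kind of API path a renderer would use to hand per-keyframe triangle positions to the hardware (assuming an already initialized OptiX context; the device buffers and vertex count are hypothetical):

// Sketch: triangle build input with two vertex keyframes for motion blur.
// d_verts_t0 / d_verts_t1 are hypothetical device buffers holding the
// mesh at shutter open and shutter close.
#include <cuda.h>
#include <optix.h>

static CUdeviceptr  vertexBuffers[2];
static unsigned int geomFlags[1] = { OPTIX_GEOMETRY_FLAG_NONE };

OptixBuildInput makeMotionTriangles(CUdeviceptr d_verts_t0,
                                    CUdeviceptr d_verts_t1,
                                    unsigned int numVertices) {
    vertexBuffers[0] = d_verts_t0;   // positions at time 0.0
    vertexBuffers[1] = d_verts_t1;   // positions at time 1.0

    OptixBuildInput input = {};
    input.type = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;
    input.triangleArray.vertexFormat  = OPTIX_VERTEX_FORMAT_FLOAT3;
    input.triangleArray.numVertices   = numVertices;
    input.triangleArray.vertexBuffers = vertexBuffers;  // one buffer per key
    input.triangleArray.flags         = geomFlags;
    input.triangleArray.numSbtRecords = 1;
    return input;
}

// The matching build options enable motion with the same key count:
//   OptixAccelBuildOptions opts = {};
//   opts.motionOptions.numKeys   = 2;
//   opts.motionOptions.timeBegin = 0.0f;
//   opts.motionOptions.timeEnd   = 1.0f;

As far as I know, DXR (as of 1.1 in 2020) exposes no equivalent motion options, which matches the point about DirectX above.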
 
PCIe 4.0, like any new technology, will require developers to lean on it before it shows decent benefits, and market saturation at this moment is not high enough for that to happen in widely adopted software or games.
PCIe 4.0 was already shown to matter for video editing, as it can provide decent time savings when working on large and complex projects.
PCIe 4.0 will also matter for high-refresh competitive gamers, as there are small but consistent increases in performance in lower-resolution / higher-frame-rate titles (see the HW Unboxed video).

Would I lose sleep over using a PCIe 3.0 board with an RTX 3080 or RDNA GPU in 2020 or 2021? No, as I don't do enough work to be hurt by the lost performance, and I mainly game between spreadsheets and office apps with an fps cap at 200 even in CS:GO.
Still, the enthusiast in me can appreciate the ever-evolving need for higher bandwidth and lower latencies.
 
Depends? I could imagine that if you have a sufficiently complex update to the BVH, maybe even concurrent with other transfers, PCIe bandwidth could make an arguably small difference. I would imagine, though, that PCIe latency plays a larger role.
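As a rough feel for the bandwidth side of that, here is a toy estimate of what a hypothetical per-frame vertex/BVH upload would cost on each link (the 64 MB figure is made up purely for illustration, and protocol overhead is ignored):

// Toy transfer-time estimate for a hypothetical 64 MB per-frame upload.
#include <cstdio>

int main() {
    const double update_gb = 64.0 / 1024.0;  // hypothetical upload, GB
    const double pcie3_gbs = 15.75;          // x16, 128b/130b encoding
    const double pcie4_gbs = 31.51;

    std::printf("PCIe 3.0 x16: %.2f ms per frame\n", update_gb / pcie3_gbs * 1e3); // ~3.97
    std::printf("PCIe 4.0 x16: %.2f ms per frame\n", update_gb / pcie4_gbs * 1e3); // ~1.98
}

Even under those generous assumptions the difference is about 2 ms per frame, which fits the "arguably small" characterization.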
 
Depends? I could imagine that if you have a sufficiently complex update to the BVH, maybe even concurrent with other transfers, PCIe bandwidth could make an arguably small difference. I would imagine, though, that PCIe latency plays a larger role.

I was also wondering how PCIe revisions are going to scale when a single GPU is run across multiple VMs. Let's say, scaling from 1 to 8 virtual machines sharing a single GPU, all of them running a mix of compute and 3D applications...

Wish Wendell at L1T would do that!
 
Depends? I could imagine that if you have a sufficiently complex update to the BVH, maybe even concurrent with other transfers, PCIe bandwidth could make an arguably small difference. I would imagine, though, that PCIe latency plays a larger role.
PCIe 4.0 is mostly an update to the link and interface rather than other parts of the protocol. The base latency is seemingly little-changed, but there were some tweaks to increase the total number of transactions supported in flight, which might help hide latency (rough sketch below). I'm curious whether a game can spawn enough requests to hit these limits, although a more variable workload might lead to more unique requests.
https://blogs.synopsys.com/vip-central/2016/05/03/full-utilization-of-16-gts-pcie-gen-4-bandwidth/
The physical link management and improved RAS probably shouldn't affect much on a consumer board, barring some kind of marginality with the motherboard/GPU.
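As a rough Little's-law illustration of why the in-flight transaction limits matter (the 1 microsecond round-trip latency here is an assumed round number, not a measured figure):

// Little's law: bytes in flight = bandwidth * round-trip latency.
// With 256-byte max read requests, that sets how many outstanding
// requests are needed to keep the link busy.
#include <cstdio>

int main() {
    const double latency_s = 1e-6;             // assumed round-trip latency
    const double req_bytes = 256.0;            // max read request size
    const double bw3 = 15.75e9, bw4 = 31.51e9; // x16 link rates, bytes/s

    std::printf("PCIe 3.0 x16: ~%.0f requests in flight to saturate\n",
                bw3 * latency_s / req_bytes);  // ~62
    std::printf("PCIe 4.0 x16: ~%.0f requests in flight to saturate\n",
                bw4 * latency_s / req_bytes);  // ~123
}

Doubling the link rate doubles the outstanding requests needed at the same latency, which is presumably why the transaction-tracking limits were raised.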

There may also be a factor related to the broader and less predictable set of assets that might be hit in a path-traced workload versus the same scene without the technique. The system could be loading a greater amount of textures or shaders into VRAM to satisfy this expanded footprint, and that may be a case where PCIe 3.0's transfer capabilities are generally, but not always, in excess of what is necessary.
If the driver/application heuristics for loading assets make assumptions that are typically not broken in a rasterized workload, there could be a few instances in a more variable workload where the system needs to do more last-minute swaps, which may show up as the occasional longer 99th-percentile frame times.
 
Yeah, it needs app support and clearly isn't exposed by DirectX. I think it's primarily targeted at offline renderers, though.
Silly idea from a couple of days ago...

You get a per-ray record of when the ray was cast.
In theory you should be able to reconstruct subframes from that information. (Although they would most likely be quite coarse.)
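A toy sketch of that reconstruction idea, assuming each sample carries the time its ray was cast within the shutter interval [0, 1) (the Sample/Accum types and the whole setup are hypothetical):

// Bin samples by ray-cast time into K temporal buckets to build
// K coarse subframes; average each bucket per pixel afterwards.
#include <vector>

struct Sample { int pixel; float time; float r, g, b; };  // hypothetical
struct Accum  { double r = 0, g = 0, b = 0; int n = 0; };

std::vector<std::vector<Accum>> reconstructSubframes(
        const std::vector<Sample>& samples, int numPixels, int numKeys) {
    std::vector<std::vector<Accum>> sub(numKeys,
                                        std::vector<Accum>(numPixels));
    for (const Sample& s : samples) {
        int k = static_cast<int>(s.time * numKeys);  // temporal bucket
        if (k >= numKeys) k = numKeys - 1;           // clamp time == 1.0
        Accum& a = sub[k][s.pixel];
        a.r += s.r; a.g += s.g; a.b += s.b; ++a.n;
    }
    return sub;  // coarse: only ~1/numKeys of the samples per subframe
}

With typical per-pixel sample counts each subframe would indeed be very noisy, which matches the "quite coarse" caveat.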
 