> So, maybe NVidia's unlaunching happened because RDNA 3 arrived at AIBs...

No, it's all a brand-saving exercise.
So 2X the RT performance and still about 2X less than the competition?
> So 2X the RT performance and still about 2X less than the competition?

If it has 2x better raster performance but only 2x better RT performance, then RT performance relative to raster is the same as it was with RDNA2, and no improvement was made there.
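A rough way to put that arithmetic, assuming both raster and RT throughput scale by the same factor (the 2x figures are the thread's claims, not confirmed specs):

```latex
% If raster and RT throughput both scale by the same factor k (k = 2 in the
% figures above), the RT-to-raster ratio (i.e. the relative RT cost) is unchanged:
\[
  \frac{\mathrm{RT}_{\mathrm{RDNA3}}}{\mathrm{Raster}_{\mathrm{RDNA3}}}
  = \frac{k \cdot \mathrm{RT}_{\mathrm{RDNA2}}}{k \cdot \mathrm{Raster}_{\mathrm{RDNA2}}}
  = \frac{\mathrm{RT}_{\mathrm{RDNA2}}}{\mathrm{Raster}_{\mathrm{RDNA2}}}
\]
% So any relative RT deficit versus the competition carries over, provided the
% competition scales its RT at least as fast as its raster.
```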
> Maybe I missed the boat, but why don't we expect more offloading this generation, like Nvidia or Intel, or even more?

No.
> Did they announce somewhere that it would be like RDNA2?

They've yet to say anything besides "it's gonna have class-leading perf/W".
> They've yet to say anything besides "it's gonna have class-leading perf/W".

Even compared to Ada? Does it include RT perf or not?
> Even compared to Ada?

That's piss easy.
> Does it include RT perf or not?

Idk, ask their marketing team; they're busy shilling real_frames™ now.
> That's piss easy.

50% better efficiency than RDNA2 is far from enough to beat Ada in RT, compute, and rendering. They can only win in old-school pure raster workloads, which are now "useless" because you reach 150+ fps at 4K. But I would love to be wrong, for the sake of competition.
> 50% better efficiency than RDNA2 is far from enough to beat Ada in RT, compute, and rendering.

I assume you've profiled N31 over a number of different workloads already?
> They can only win in old-school pure raster workloads.

Yeah, that's shit that matters, especially down the stack where the PPA race gets kinda crazy.
> They can only win in old-school pure raster workloads, which are now "useless" because you reach 150+ fps at 4K.

Good news, 4K@240 monitors are on the horizon!
> But I would love to be wrong, for the sake of competition.

rofl
> 50% better efficiency than RDNA2 is far from enough to beat Ada in RT, compute, and rendering. They can only win in old-school pure raster workloads, which are now "useless" because you reach 150+ fps at 4K. But I would love to be wrong, for the sake of competition.

Don't you know what ">50%" means? Greater than 50%.
There was a new LDS ds_bvh_stack_rtn instruction added to LLVM. How it would fit into the RT traversal kernel is unclear to me, but if I were to venture a guess based on the patch, it probably:
* hosts the BVH traversal stack on the LDS
* feeds (some of) the ray data to the TCP directly from the LDS-hosted stack
* writes the results from the TMU-hosted ray intersection unit to the LDS-hosted stack directly
* returns the result to the shader/VGPR as an indirect reference to the LDS-hosted stack
... given that it "accesses LDS in a complicated way".
The minimum expectation is that this reduces VGPR pressure (8 VGPRs versus 12-16 VGPRs). The utopia expectation is that the LDS gets a ray traversal engine, so potentially one can offload RT traversal and co-execute other work (i.e., simply waiting on lgkm_cnt(0) when you run out of work to co-execute, just like your normal LDS accesses).
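To make the "traversal stack in LDS" idea concrete, here is a minimal CUDA-style sketch, with shared memory playing the role of the LDS. This is purely illustrative and assumes nothing about the actual RDNA3 ISA or the semantics of ds_bvh_stack_rtn; fetch_node() and intersect_node() are hypothetical stand-ins for the node fetch and the fixed-function box/triangle test.

```cuda
#include <cstdint>

// Hypothetical, simplified data types for illustration only.
struct Ray  { float ox, oy, oz, dx, dy, dz, tmax; };
struct Node { uint32_t child[4]; bool leaf; uint32_t prim; };

#define STACK_DEPTH  32
#define INVALID_NODE 0xFFFFFFFFu

// Stand-ins for the real node fetch and intersection test (trivial bodies so
// the sketch compiles; a real traversal would do slab/triangle tests here).
__device__ const Node* fetch_node(const Node* nodes, uint32_t idx) { return &nodes[idx]; }
__device__ bool intersect_node(const Node& n, const Ray& r) { return r.tmax > 0.0f; }

__global__ void trace(const Node* nodes, const Ray* rays, uint32_t* hit_prim, uint32_t root)
{
    // Per-thread traversal stack carved out of shared memory (the LDS
    // analogue) instead of registers: this is the "reduced VGPR pressure"
    // angle, since the stack no longer lives in the register file.
    extern __shared__ uint32_t lds_stack[];            // blockDim.x * STACK_DEPTH entries
    uint32_t* stack = &lds_stack[threadIdx.x * STACK_DEPTH];

    const uint32_t tid = blockIdx.x * blockDim.x + threadIdx.x;
    Ray r = rays[tid];
    uint32_t hit = INVALID_NODE;

    int sp = 0;
    stack[sp++] = root;                                 // push root onto LDS-hosted stack

    while (sp > 0) {
        const Node* n = fetch_node(nodes, stack[--sp]); // pop from LDS-hosted stack
        if (!intersect_node(*n, r))
            continue;
        if (n->leaf) {                                  // leaf: record a hit (a real
            hit = n->prim;                              // traversal tracks the closest one)
            continue;
        }
        for (int i = 0; i < 4; ++i)                     // interior: push live children back
            if (n->child[i] != INVALID_NODE && sp < STACK_DEPTH)
                stack[sp++] = n->child[i];
    }
    hit_prim[tid] = hit;
}
```

The launch would size dynamic shared memory as blockDim.x * STACK_DEPTH * sizeof(uint32_t). The trade-off the comment alludes to: register-file pressure goes down, but LDS capacity and bank bandwidth are now shared with whatever else the shader keeps in LDS.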
What's a max. TBP for mobile GPUs these days? 150W?
That would be impressive for a mobile part, unimpressive if it's a sizeable chunk of an N32.
It depends on how much it gets cut down in shaders, bus width, and clocks.
If it's a 256-bit part, it would be strange to put it up against possible NV competition with a 192-bit bus.
Another part of the equation is that current notebook designs top out at around 150W.
Edit: if it were an N33, that would be amazing. But Greymon said it's not an N33.
The strange thing here is that for 1080p (the most-used resolution on laptops), N33 is already supposed to almost hit those performance levels with a similar power envelope.
It would be strange to have two mobile solutions so close in performance but very different in cost.
Something does not add up.
> 150W?

150±15W, yeah.
> If it's a 256-bit part, it would be strange to put it up against possible NV competition with a 192-bit bus.

Dawg, 6800M versus 3080 Mobile is literally that right now, just in the opposite direction.
> That would be impressive for a mobile part, unimpressive if it's a sizeable chunk of an N32.

Yeah, it's mobile N32.