This is a benefit to HFR that I never even considered.
We had this exact discussion in another thread, but good to see it verified by sebbbi here.
I think some of the leaks indicated there were some lower-level API commands for the Xbox One that allowed developers to tweak items like CU allocation patterns. It's possible there are other low-level settings, like giving a certain number of CUs to one part of the workload or letting the GPU allocate as much as it can for a given shader type.

That shouldn't be the case at all. GPU commands are dispatched and dealt with by the hardware schedulers. No code should have any idea what the CU configuration is, and that should never impact whether code runs or not.
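To illustrate that hardware-scheduler point with a toy model (Python, with made-up CU counts that aren't any console's real configuration, and a software loop standing in for what real GPUs do in fixed-function hardware): a dispatch only says how many thread groups to run, and the same dispatch lands on whatever CU configuration the chip happens to have.

```python
# Toy model of GPU work distribution: the dispatch only names a number of
# thread groups; how they map onto CUs is the scheduler's business.
# CU counts below are arbitrary illustrations, not leaked specs.
from collections import deque

def dispatch(num_groups, work):
    """What game code effectively expresses: run `work` for N thread groups."""
    return deque(range(num_groups)), work

def hardware_scheduler(queue, work, num_cus):
    """Greedy distribution of queued groups over however many CUs exist."""
    results = {}
    cu_load = [0] * num_cus
    while queue:
        group = queue.popleft()
        cu = cu_load.index(min(cu_load))   # pick the least-loaded CU
        results[group] = work(group)
        cu_load[cu] += 1
    return results, cu_load

queue, work = dispatch(1024, lambda g: g * 2)
# The same dispatch runs unchanged on 36, 40 or 52 CUs; only the load split differs.
for cus in (36, 40, 52):
    pending = deque(queue)                 # copy of the pending groups
    _, load = hardware_scheduler(pending, work, cus)
    print(cus, "CUs ->", max(load), "groups on the busiest CU")
```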
Just as there's advantage in faster, fewer CPU cores rather than more, slower cores, is there a case for GPU workloads, especially compute, where higher clocks and less parallelism is better? Perhaps they chose 2GHz and fewer CUs because they can, for the same total throughput as 1.5GHz and 4/3 times as many CUs, but with faster individual thread (warp) execution. Also, the schedulers and everything else will be running that much faster. [/theory]

Higher clocks, absent downsides like power consumption and feeling memory latency more acutely, would be a more generally useful value to scale, since additional clock speed benefits both serial and parallel algorithms.
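As a back-of-envelope check of that 2GHz-versus-1.5GHz trade (the CU counts here are hypothetical, and 64 lanes at 2 FLOPs per clock is just the usual GCN/RDNA-style assumption, not a confirmed spec):

```python
# Rough throughput equivalence: fewer CUs at a higher clock vs. more CUs at a
# lower clock. CU counts are illustrative only.
LANES_PER_CU = 64      # shader ALUs per CU (GCN/RDNA-style assumption)
FLOPS_PER_LANE = 2     # one FMA counted as 2 FLOPs per clock

def tflops(cus, ghz):
    return cus * LANES_PER_CU * FLOPS_PER_LANE * ghz / 1000.0

narrow_fast = tflops(36, 2.0)   # e.g. 36 CUs at 2.0 GHz
wide_slow   = tflops(48, 1.5)   # 4/3 as many CUs at 3/4 the clock
print(narrow_fast, wide_slow)   # both ~9.2 TFLOPS: same peak throughput,
                                # but each wavefront finishes ~33% sooner
                                # on the higher-clocked part.
```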
So for a 60Hz display would you still render 120fps and have more data for reconstruction?

I think if you simply joined two frames of a 120Hz flow into one for 60Hz you'd only blur things out, not bring any more detail.
Accumulating 2 120Hz frames into one 60Hz frame would create ghosting artifacts. Unless they do uneven timestepping between frames, keeping them both within the bounds of a 60Hz render with a reasonable shutter speed for its motion blur. In that case, 60Hz monitors would essentially get supersampled motion blur, which sounds so high fidelity my legs shake like I'm a high-school freshman girl being asked to prom by Jeff, a senior and team captain of the school's winning football team.
Would that high fidelity motion blur offer more or less detail than your prom fantasy just did?
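Roughly what that accumulation looks like as a sketch (NumPy, toy 1-D "frames" with made-up positions): a straight average of two 120Hz sub-frames gives two half-bright copies, i.e. ghosting, while many sub-frames over the full 1/60s interval spread into the kind of streak you'd want from motion blur.

```python
import numpy as np

# Two consecutive 120Hz sub-frames of a toy 1-D "image": one bright moving pixel.
frame_a = np.zeros(8); frame_a[2] = 1.0   # position at t
frame_b = np.zeros(8); frame_b[4] = 1.0   # position at t + 1/120s

# Naive accumulation into one 60Hz frame: a straight average. With only two
# samples the object shows up as two half-bright copies (the ghosting described
# above) rather than a continuous blur streak.
frame_60 = 0.5 * (frame_a + frame_b)
print(frame_60)        # 0.5 at positions 2 and 4, nothing in between

# With many sub-frames covering the full 1/60s interval (or analytic motion
# blur per rendered frame), the energy spreads into a streak instead, which is
# the "supersampled motion blur" idea.
streak = np.mean([np.roll(frame_a, s) for s in range(3)], axis=0)
print(streak)          # ~0.33 at positions 2, 3 and 4
```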
If you have HBM2E, you can get on the order of 60+ gigabytes per second per millimeter of die edge. You can only get about a sixth of that for GDDR6. And I can only get about a tenth of that with LPDDR5. So if you have a very high bandwidth requirement, and you don’t have a huge chip, you don’t have a choice. You’re going to run out of beachfront on your SoC if you put down anything but an HBM interface.
https://semiengineering.com/dram-tradeoffs-speed-vs-energy/
That's an interesting metric: GB/s per mm of die edge.
HBM2E = 60 GB/s per mm
GDDR6 = 10 GB/s per mm
LPDDR5 = 6 GB/s per mm
Assuming currently available speeds (HBM2E at 410 GB/s per stack, GDDR6 at 16 Gbps, LPDDR5 at 6400 Mbps), this gives an idea of how much edge space is consumed to fit a certain width of memory:
2 HBM2E stacks = 14 mm
256-bit GDDR6 = 51 mm
384-bit GDDR6 = 77 mm
256-bit LPDDR5 = 34 mm
While GDDR5 isn't mentioned, from the signal list it required about 20% fewer lines per chip than GDDR6, so this gen would be roughly:
256-bit GDDR5 = 41 mm
384-bit GDDR5 = 62 mm
There has to be enough space left for PCIe lanes and everything else besides the memory.
As an example, with a wild guess of 75% of the edge for memory and 25% for everything else, and a chip of 360 mm² (19 mm × 19 mm, 76 mm of total edge), 384-bit would not fit and 320-bit would be borderline. It also becomes clear that a split memory setup could only work by using HBM for one of the pools.
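A quick script to sanity-check those numbers from the GB/s-per-mm figures above (same speed assumptions; the 75% share of the edge reserved for memory PHYs is the same wild guess, not a known figure):

```python
# Back-of-envelope die-edge budget, reproducing the figures above.
# Assumptions: HBM2E 410 GB/s per stack, GDDR6 16 Gbps, LPDDR5 6400 Mbps.
GBPS_PER_MM = {"HBM2E": 60, "GDDR6": 10, "LPDDR5": 6}

def edge_mm(kind, bus_bits=None, stacks=None, pin_rate_gbps=None):
    """Edge length needed for a given memory config, from GB/s-per-mm."""
    if kind == "HBM2E":
        bandwidth = stacks * 410                      # GB/s
    else:
        bandwidth = bus_bits * pin_rate_gbps / 8      # GB/s
    return bandwidth / GBPS_PER_MM[kind]

print(edge_mm("HBM2E", stacks=2))                          # ~13.7 mm (vs 14 mm above)
print(edge_mm("GDDR6", bus_bits=256, pin_rate_gbps=16))    # ~51 mm
print(edge_mm("GDDR6", bus_bits=384, pin_rate_gbps=16))    # ~77 mm
print(edge_mm("LPDDR5", bus_bits=256, pin_rate_gbps=6.4))  # ~34 mm

# Edge budget for a 360 mm^2 die (19 mm x 19 mm = 76 mm of perimeter),
# with the same wild guess of 75% of the edge available for memory PHYs.
budget = 4 * 19 * 0.75
print(budget)   # 57 mm: 384-bit GDDR6 (~77 mm) clearly won't fit
```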
I believe it's simple motion interpolation. Most modern LED TVs are capable of motion smoothing (e.g., Motionflow, TruMotion) which can simulate higher refresh rates (120Hz, 240Hz, etc.) on 60Hz panel TVs.

Are you sure about that? Sounds wrong to me. Motion smoothing of lower-framerate content up to 60fps is possible, but motion smoothing of 60fps material to a virtual 120Hz can't be done, and all you can do is add more motion blur to represent movement during the 1/60th-second interval.
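For what "simple motion interpolation" means in its crudest form, a toy NumPy sketch (made-up frames): a plain blend just cross-fades, while motion-compensated interpolation, which is roughly what those TV features approximate, actually moves the object; either way the extra frame needs a panel that can display it.

```python
import numpy as np

# Two frames of a 60fps source (toy 1-D "images"): a bright pixel moving from
# position 1 to position 5 over one 1/60s step.
f0 = np.zeros(8); f0[1] = 1.0
f1 = np.zeros(8); f1[5] = 1.0

# A plain blend for the in-between frame just cross-fades: two dim copies,
# no new positional information.
blend = 0.5 * (f0 + f1)

# Motion-compensated interpolation (roughly what TV "motion smoothing" tries
# to do): estimate the displacement and place the object halfway along it.
displacement = int(np.argmax(f1)) - int(np.argmax(f0))   # +4 positions
interp = np.roll(f0, displacement // 2)                    # object at position 3

print(blend)    # 0.5 at positions 1 and 5
print(interp)   # 1.0 at position 3
# Either way, the inserted frame only helps if the panel can actually refresh
# faster than the 60Hz source, which is the objection above.
```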
Hmm, I think you need to take into account memory clocks. We also have hard figures for die edge for a GDDR6 PHY controller; on a 360 mm² die you can fit two more controllers on the left side.

Why is there plenty of black stuff on the right of the die? Is there something there that shouldn't be needed in a console?
Die edge measurements don't say how much total PCB area is needed for each implementation though. GDDR6 is demanding with trace lengths AFAIK, so the actual PCB area it takes is significantly larger than for HBM or even LPDDR.
Do we know there is such a direct association between the width of the controller circuitry and how it is routed to the edge?