Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
Looks like a set of tests ramping up in complexity. I assume the Cornell one is the RGB-box room with things in it to look at surface lighting direction/transfer. The bunny might be about angles/AO. I had the impression Sponza is a benchmark for bounce lighting ("Cry" presumably meaning the version within CryEngine, or one including changes/assets Crytek made to it?).
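For context, the inner loop these test scenes (Cornell box, bunny, Sponza) exercise is mostly ray/triangle intersection. Here is a minimal sketch of the classic Möller–Trumbore test in plain Python, purely illustrative; a real compute-shader version would be HLSL/GLSL and heavily vectorized:

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle test: return hit distance t, or None on miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) / det          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) / det             # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) / det
    return t if t > eps else None      # only count hits in front of the origin
```

A ray from (0, 0, -1) along +z hits a triangle spanning the z = 0 plane at distance 1; flip the ray direction and it misses.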

More things showing that a properly coded compute-based RT solution isn't nearly as far behind RTX as NV would want you to believe.

This makes the AMD patents more interesting, and raises the question of whether they are more about making existing 3D hardware blocks more efficient at running RT code.

Regards,
SB
 
More things showing that a properly coded compute-based RT solution isn't nearly as far behind RTX as NV would want you to believe.

This makes the AMD patents more interesting, and raises the question of whether they are more about making existing 3D hardware blocks more efficient at running RT code.

Regards,
SB
Scene complexity is very small here compared to a real game. That is where RTX is still viable; but I largely suspect compute shaders will not be.
 
More things showing that a properly coded compute-based RT solution isn't nearly as far behind RTX as NV would want you to believe.
Similar thoughts here (without the conspiracy suspicions :) ): http://diaryofagraphicsprogrammer.blogspot.com/2018/09/ray-tracing-without-ray-tracing-api.html (There is also another blog post about the discussed compute implementation with benchmarks somewhere, but I can't find it quickly.)

But I do not think NV would make their software fallback slow on purpose. It's just not a good idea to implement this API as-is; you need a custom solution if you want it to be as fast as possible.
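To illustrate what a "custom solution" means in practice: a hand-rolled compute tracer owns its acceleration-structure traversal loop instead of going through the API's generic fallback path, so it can specialize node layout and ordering to its scenes. A toy stack-based BVH traversal sketch in Python follows; the node layout and names are my own invention for illustration, not from any of the linked posts:

```python
# Minimal stack-based BVH traversal, the kind of loop a custom compute RT
# kernel runs. GPUs avoid recursion, hence the explicit stack.

def slab_test(orig, inv_d, lo, hi):
    """Ray/AABB test via the slab method.
    Assumes the ray origin is not exactly on a slab plane (avoids 0 * inf)."""
    tmin, tmax = 0.0, float('inf')
    for i in range(3):
        t1 = (lo[i] - orig[i]) * inv_d[i]
        t2 = (hi[i] - orig[i]) * inv_d[i]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# Tiny hand-built scene: a root box with two leaf boxes inside it.
nodes = [
    {'lo': (0, 0, 0), 'hi': (3, 1, 1), 'left': 1, 'right': 2},  # root
    {'lo': (0, 0, 0), 'hi': (1, 1, 1), 'prim': 0},              # left leaf
    {'lo': (2, 0, 0), 'hi': (3, 1, 1), 'prim': 1},              # right leaf
]

def traverse(nodes, root, orig, d):
    """Return ids of primitives whose bounding boxes the ray touches."""
    inv_d = [1.0 / c if c != 0 else float('inf') for c in d]
    hits, stack = [], [root]
    while stack:
        n = nodes[stack.pop()]
        if not slab_test(orig, inv_d, n['lo'], n['hi']):
            continue                   # whole subtree culled
        if 'prim' in n:
            hits.append(n['prim'])     # a real tracer would test triangles here
        else:
            stack.append(n['left'])
            stack.append(n['right'])
    return hits
```

A ray down the +x axis through both boxes reports both primitives; a ray above the root box reports none.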
 
Conspiracy theories are funny as always, like the Xbox with hidden RT dies and all. What's interesting is how Nvidia's next GPU, coming early 2020, will implement RT, and whether it will differ from the current approach. I think not, as performance isn't bad for what it is.
 
But I do not think NV would make their software fallback slow on purpose. It's just not a good idea to implement this API as-is; you need a custom solution if you want it to be as fast as possible.

Perhaps not slow on purpose, but there is no incentive for them to put any effort into making it run as well as it could. Doing so not only involves an investment in time and money, but could also impact sales of their latest cards.

In many ways similar to PhysX. No particular effort was put into making it run well until two things happened:
  • Other developers pointed out areas where software PhysX was seriously deficient.
  • Hardware PhysX failed to take off in any significant manner.
Regards,
SB
 
How do these benchmarks compare with RTX performance? Seeing that the software emulation of DXR is weak doesn't necessarily mean RTX inclusion isn't extremely valuable in performance/mm².
 
How do these benchmarks compare with RTX performance? Seeing that the software emulation of DXR is weak doesn't necessarily mean RTX inclusion isn't extremely valuable in performance/mm².
Sure, if they included (some) RT hardware there would be (1) an advantage over non-specialized hardware in rendering, and (2) an advantage over non-specialized hardware in non-rendering tasks, which could enable nice developments for games... I believe more in the second case.
 
Conspiracy theories are funny as always, like the Xbox with hidden RT dies and all. What's interesting is how Nvidia's next GPU, coming early 2020, will implement RT, and whether it will differ from the current approach. I think not, as performance isn't bad for what it is.

My conversation with misterXmedia continues on my website... I will not engage in any kind of name-calling with him, so I will accept what he has to say. But I'm refuting it all. His theories are way, way crazy, especially since they include not only Scarlett, but also the Xbox One X and the original Xbox.
 
How do these benchmarks compare with RTX performance? Seeing that the software emulation of DXR is weak doesn't necessarily mean RTX inclusion isn't extremely valuable in performance/mm².
Not provided. It was just a port from CUDA to HLSL to test performance differences, mainly around the emulation portion of DXR. I am curious to see what the numbers are with RTX on.

Anyone have thoughts on why FXC is performing better than DXC? I was under the assumption that DXC should be the better compiler.
 
Anyone have thoughts on why FXC is performing better than DXC? I was under the assumption that DXC should be the better compiler.
No experience here, but I have seen differences between OpenCL and Vulkan compilers on AMD GPUs. On average VK was 10% faster, but I had no VK profiling tools and don't know the reason.
However, there were also exceptions where CL was faster, so a single ray-tracing shader may not be enough to judge between compilers overall.
 
Sony is concerned about the PS5's price... did they think a broader price reduction would happen faster than it has in reality? For us this "concern" is good news IMHO...
 
Just...no!

Link to the same series of patents; this one just explains the queue system used in one patent, but gives details.

EDIT: this is the same patent but with diagrams in English, which is easier to understand.

There are only three SSD patents: one describing the general system, another for queue management, and a last one for saving energy by using SRAM during low-power mode.
 
Link to the same series of patents; this one just explains the queue system used in one patent, but gives details.
The patent I saw talks about input methods and robots of some form, and has nothing to do with an SSD.

A robot platform 100 for implementing embodiments of the present invention may take the form of any suitable robotic device, or simulation of a robotic device, as applicable.

Figure 1 illustrates front and rear views of an exemplary legged locomotive robot platform 100. As shown, the robot includes a body, head, right and left upper limbs, and right and left lower limbs for legged movement. A control unit 80 (not shown in Figure 1) within the body provides a control system for the robot.
 
There are only three SSD patents: one describing the general system, another for queue management, and a last one for saving energy by using SRAM during low-power mode.

This reads like a lightweight/optimised variant of NVMe. There is a fair amount of cruft in NVMe that I doubt the PS5 needs; 64k command queues each allowing 64k commands (four billion commands), namespaces, NVMHCI, I/O virtualisation - all useful on workstations or servers, but which add complexity (and therefore cost) and slow throughput on a device with more predictable and less sophisticated I/O needs.
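As a rough illustration of the mechanism being called cruft here: an NVMe-style submission queue is essentially a ring buffer with head and tail indices, and the spec allows up to 64k such queues of up to 64k entries each, hence the four billion outstanding commands. A toy Python model of one such queue (my own simplification, not the patent's actual design):

```python
# Toy model of an NVMe-style submission queue: a fixed-size ring with a
# producer (host) tail index and a consumer (controller) head index.
# Real NVMe signals new entries via a doorbell register write; here the
# tail update stands in for that.

class SubmissionQueue:
    def __init__(self, entries=16):
        self.ring = [None] * entries
        self.head = 0   # consumer (controller) index
        self.tail = 0   # producer (host) index

    def full(self):
        # One slot is kept empty so full and empty are distinguishable.
        return (self.tail + 1) % len(self.ring) == self.head

    def submit(self, cmd):
        if self.full():
            raise RuntimeError("queue full")
        self.ring[self.tail] = cmd
        self.tail = (self.tail + 1) % len(self.ring)  # "doorbell" update

    def pop(self):
        if self.head == self.tail:
            return None                # queue empty
        cmd = self.ring[self.head]
        self.head = (self.head + 1) % len(self.ring)
        return cmd
```

A console with one predictable I/O client could get by with one or two small queues like this, which is the argument for stripping the rest of the machinery out.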
 
This reads like a lightweight/optimised variant of NVMe. There is a fair amount of cruft in NVMe that I doubt the PS5 needs; 64k command queues each allowing 64k commands (four billion commands), namespaces, NVMHCI, I/O virtualisation - all useful on workstations or servers, but which add complexity (and therefore cost) and slow throughput on a device with more predictable and less sophisticated I/O needs.

Yes, and with the SRAM that would be a second element making the custom SSD cheaper than a standard NVMe SSD.
 