I don't see that limitation changing soon.
We're talking consoles; of course that limitation will be gone and RTX is a waste.
I don't see that limitation changing soon.
So if we're trying to determine the validity of the video, how do we trust the video?
It's from the YouTube video.
The "full CU" in that patent application has "simple" ALUs with 4 times the execution units of the "full" ALUs (4 'full' vs 16 'simple'); they do not disclose the ratio for the "small CU".We all know a CU is composed of 64 shader cores. But how many for a small CU?
If it is composed of 6 shader cores, then a GPU with 52 CU+52 SCU would give us 3640 shader cores.
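Spelling out that arithmetic (assuming 64 shader cores per full CU and 6 per small CU, as above): 52 × 64 + 52 × 6 = 3328 + 312 = 3640.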
Bounding Volume Hierarchy is essentially a tree of bounding boxes - how much "research and experimentation" do you need for a data structure that contains 2 memory pointers and 8 XYZ coordinates?
I think we lose more opportunities for software and hardware evolution by adopting Nvidia's paradigm than we win.
It would take at least Snapdragon 8180 / SDM1000, and probably several more generations, until ARM chips could be considered for real game consoles (i.e. those that do not stick to 40-year-old platformer games in a handheld form factor).
An ARM-based console with a good GPU would be interesting to see.
If it's fake, whoever made it should be immediately hired by Nintendo.
The days of "it looks too good to be fake" are looooong gone.
As you said, people even make physical mock-ups now.
Does SMT require special hardware / a dedicated part of the chip?
Bounding Volume Hierarchy is essentially a tree of bounding boxes - how much "research and experimentation" do you need for a data structure that contains 2 memory pointers and 8 XYZ coordinates?
Yes, Nvidia's subdivision algorithm is proprietary and tied to their fixed-function hardware, but what would be the use of giving developers full control of the parameters, other than possibly stalling the BVH traversal hardware? Nvidia's been doing their homework on BVH (see for example
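For what it's worth, the node being described would look roughly like this; just a minimal sketch assuming an AABB stored as min/max corners plus two child links, not any driver's actual layout:

```cpp
// Minimal BVH node sketch. Field names and layout are illustrative only.
struct AABB {
    float min[3];  // lower corner (x, y, z)
    float max[3];  // upper corner (x, y, z)
};

struct BVHNode {
    AABB bounds;         // box enclosing everything below this node
    BVHNode* left;       // child pointers; both null for a leaf
    BVHNode* right;
    int primitiveIndex;  // only meaningful for leaves
};
```

The struct itself is trivial, of course; the homework is in how the tree gets built, refit and traversed every frame.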
I sort of agree. I am trying to keep it within the boundaries of why I think Nvidia's route of BVH acceleration could be negative for a next-gen console. But I also agree it is maybe more about ray tracing than about consoles.
It feels like the entire discussion around RayTracing and BVH should be moved out into the Graphics > Rendering Technology and APIs forum, as I don't see any of it having anything to do with "Next Generation Hardware Speculation".
Almost certainly.
SMT is the term for the general concept of running multiple threads through the same core by allowing different threads to take up different resources in parallel; vendors may or may not give their own marketing names to their individual implementations.
Thanks. Can that be seen in die shots? Do you think it's just extra cache on the hardware side, and/or all the Zen chips being capable of SMT through firmware?
https://www.quora.com/Is-the-ability-for-CPU-hyper-threading-in-the-software-or-hardware
For Intel it seems there are actual physical additions to the CPU.
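As a small illustration of the software-visible side of SMT: the OS simply reports more logical processors than physical cores. A minimal sketch (the physical core count is hard-coded because the C++ standard library doesn't expose it; 8 cores is just an example):

```cpp
#include <iostream>
#include <thread>

int main() {
    // hardware_concurrency() reports *logical* processors, i.e. physical
    // cores multiplied by whatever SMT width the chip/firmware exposes.
    unsigned logical = std::thread::hardware_concurrency();

    // Physical core count isn't available from the standard library;
    // hard-coded here purely for illustration (e.g. an 8-core Zen part).
    unsigned physical = 8;

    std::cout << "Logical processors: " << logical << "\n";
    if (logical >= physical)
        std::cout << "Apparent SMT width: " << logical / physical << "\n";
    return 0;
}
```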
SMT is AMD's HT, right?
Given the generation, TDP, size, and clock of the ARM CPU in the Switch, and the fact that it's not just running 40-year-old platformers, a console part wouldn't be that far away.
It would take at least Snapdragon 8180 / SDM1000, and probably several more generations, until ARM chips could be considered for real game consoles (i.e. those that do not stick to 40-year-old platformer games in a handheld form factor).
You're moving the goalposts. Research on algorithms will continue; nobody is denying that. RTRT research is going strong and showing improvements year after year. Where's the research showing the elimination of SDF limitations in real-time graphics? Should we also assume that a rasterization algorithm that is just as good as (or better than, because why not) path tracing will appear too, because "all rendering tech is going to advance"? And all of it within the span of next-gen's console cycle?
Hogwash. It's based on the past 20 years' precedent in how graphics tech has advanced. Do you genuinely believe that going forwards, all rendering technology is going to stagnate on what we have now? That if RT wasn't introduced, we'd be looking at no algorithmic advances at all??
All rendering tech is going to advance. Given raytracing hardware, devs will find novel ways to use it to get better results. Given more general compute and ML options, devs will find new ways to use those too. There's zero wishful thinking about it - it's a certainty based on knowledge of how humanity operates and progresses, and the fact that we know we haven't reached our limits.
Why would a dev use a physics system that relies on a BVH but not ray casting when they have RT acceleration hardware at hand? Are there even any physics systems that work like that? That's like asking: "what if a game makes use of a rendering system based on quads or some primitive other than triangles? Rasterization hardware is obviously a waste of silicon!" Game tech is based mostly on the hardware it's intended to run on. But even considering such an edge case, should console hardware design be based on the general case or on hypothetical extreme outliers like the one you proposed?
That's a very simplistic way to wave off the issue.
If most of my scene is a perfect fit for the specific silver-bullet way Nvidia's driver decided to build the BVH, except for some parts that would be tremendously more efficient if done another way through compute, then sure, just use compute for the special cases. You may still be eating up some redundancy depending on the situation, which in itself is a sorry inefficiency but not the end of the world. Well, for rendering.
But say the game's physics engine can also benefit from a BVH. It doesn't rely on ray casts, though, and there is no easy way to translate whatever queries your physics engine needs into rays so it can use DXR for that. That means your physics engine will create its own BVH for the physics through compute, while Nvidia's black box is creating another one, and it's anyone's guess what it looks like, and there is no way to reuse the work from one process in the other. That is a very sorry inefficiency.
And then there is the case where MOST of your scene would be a much better fit for your own compute BVH system, and you do implement it through compute. Nice, now you've got all that RT silicon sitting idle, giving you no extra performance, because it was designed to do one thing and one thing only. That's another very sorry inefficiency.
But most of all, the sorriest thing, and one which your idea of "just use compute for special cases" ignores completely, is that you lose the contribution of research and experimentation by thousands of game graphics programmers by throwing a black box into the problem and limiting all that R&D to GPU and API design teams. I understand some are hoping next-gen consoles get some form of RT acceleration similar to Nvidia's so that we get a wide breadth of devs experimenting with it. But what I think you are ignoring is that we leave a whole other field of research opportunities unexplored by doing that. I think we lose more opportunities for software and hardware evolution by adopting Nvidia's paradigm than we win.
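To make that reuse argument concrete, here's a rough sketch (my own hypothetical code, written as plain C++ rather than a compute shader for readability) of one BVH serving two query types: a ray for rendering and a box overlap for a physics broad phase. This is the kind of sharing a fixed-function black box rules out:

```cpp
#include <utility>
#include <vector>

struct AABB { float min[3], max[3]; };

struct BVHNode {
    AABB bounds;
    int  left  = -1;     // child indices into the node array; -1 marks a leaf
    int  right = -1;
    int  primitive = -1; // only meaningful for leaves
};

struct Ray { float origin[3], invDir[3], tMax; };

// Ray vs. box (slab test).
inline bool overlaps(const AABB& b, const Ray& r) {
    float t0 = 0.0f, t1 = r.tMax;
    for (int i = 0; i < 3; ++i) {
        float tNear = (b.min[i] - r.origin[i]) * r.invDir[i];
        float tFar  = (b.max[i] - r.origin[i]) * r.invDir[i];
        if (tNear > tFar) std::swap(tNear, tFar);
        if (tNear > t0) t0 = tNear;
        if (tFar  < t1) t1 = tFar;
        if (t0 > t1) return false;
    }
    return true;
}

// Box vs. box -- the query a physics broad phase actually wants.
inline bool overlaps(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}

// One traversal routine serves both query types.
template <typename Query, typename Callback>
void traverse(const std::vector<BVHNode>& nodes, const Query& q, Callback&& onHit) {
    if (nodes.empty()) return;
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;                       // node 0 is the root
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!overlaps(n.bounds, q)) continue;
        if (n.left < 0) { onHit(n.primitive); continue; }
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
}
```

With DXR the renderer's tree lives behind the driver, so the physics side ends up building and maintaining a second structure like this anyway, which is exactly the duplication described above.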
You're also denying the possibility of changes to the DXR API in future versions. DX12 is very different from DX1. DXR 1.0 is not the end of the line; it's the beginning.
Hopefully you're wrong.
I just don't think DXR 1.5 or 2.0 hardware acceleration is very feasible for a 2020 console. I already think a DXR 1.0 one is unlikely enough. And I'm claiming that's more of a blessing than a loss.
Soooo, does AMD's chiplet CPU design have any bearing on a future APU design?
Does it, in theory, allow for more easily scalable, easily manufactured designs? At least in an age of two-tier consoles.
Just "plug in" different quantities of higher- and lower-clocked CCXs, memory, and whatever configuration of GPU.