> tss, no one ever mentions Caustic Graphics & PowerVR Wizard, like it wasn't a hybrid TBDR/ray tracer whose silicon was released a decade ago...
I would love to, but it's hard to discuss that which we know so little about.
> tss, no one ever mentions Caustic Graphics & PowerVR Wizard, like it wasn't a hybrid TBDR/ray tracer whose silicon was released a decade ago...
I mentioned it repeatedly*, but with so little coverage, so little input from those who know, and zero experience in the field, there's very little to be said! I think we'd all love to hear from those who know about it - what was PVR's solution, how did it perform, and what was learnt from it? There's probably a lot of insider knowledge that's NDA'd, though.
> It's both baffling and frustrating to read this. It is an unrealistic position held by those in romance with the discovery of some 'magic' algorithm that hundreds of PhD and master's researchers have not found in the last 40 years.
Thank you for voicing what many here feel already.
> Because all this time on PC we've had compute since 2007 and the release of DX11, no one tried it then. No one tried it through Kepler, Maxwell, Pascal..., GCN 1-4. No one tried this.
They did try. It wasn't fast enough. Those dabbling in compute-based volumetric lighting solutions found methods that worked and looked good, but were unusable for realtime applications. The mathematical theories existed, but the hardware couldn't do anything with them. It's akin to machine learning, your new delight! Machine learning and neural nets are nothing new, but now that we have hardware fast enough to implement them across a whole range of applications, people are exploring them again. Neural nets weren't used to develop image upscaling in 1982, despite the theories existing, because it wasn't practically possible. Now it is, and we have an explosion of research into new fields machine learning can tackle, because the hardware now enables it, not because the theories have suddenly been discovered or invented. If the hardware didn't exist to run these AI techniques a thousand-fold faster than previously, the research into them wouldn't be happening. And now that GPUs are 30x faster than the best possible in 2007, devs and experimenters can revisit old ideas and explore new derivatives.
> They did try. It wasn't fast enough. (...)
That's a partial interpretation of what's happening in the industry.
> Today our strongest convolutional neural networks and deep learning are driven by fixed-function Tensor Cores.
This makes sense if the algorithm is solid enough, or if there is a temporal requirement justifying FF.
> I've not seen a lot of posts that show meaningful ray tracing performance metrics on the alternatives.
There is Minecraft showing infinite bounces at a quality never seen before in realtime. There is Claybook with very fast RT. There is path-traced Q2 running, buggy but at 60fps, on my non-RTX GPU.
> Because all this time on PC we've had compute since 2007 and the release of DX11, no one tried it then. No one tried it through Kepler, Maxwell, Pascal..., GCN 1-4. No one tried this.
You are wrong. There has always been research on realtime GI, which is the final goal.
> It's both baffling and frustrating to read this. It is an unrealistic position held by those in romance with the discovery of some 'magic' algorithm that hundreds of PhD and master's researchers have not found in the last 40 years.
My work is not magic, and I took inspiration from the works of those researchers, including many from NV's circle. I was in contact with some of them - nice guys and helpful. I have no PhD, but I consider myself a researcher as well, kind of.
I tried, over and over again (I think I tried it here too, not 100% sure), but no one cared, because RTX ON and everything else sucks.
> There is Minecraft showing infinite bounces at a quality never seen before in realtime.
Are Minecraft's graphics the same as Battlefield 1's? The polygonal complexity difference alone is staggering. Not to mention physics, dynamic lights, dynamic shadows, particles, etc.
Q2 is great, but I can do the same on PS4 hardware, even with infinite bounces and better handling of materials.
> If not processing power, then data (storage and transfer rates). Either way, it's enabled by advances in technology, not by the development of new theories. If not, why is ML exploding now rather than 20 years ago? "Because all this time on PC we've had the ability to use machine learning for a myriad of opportunities. No one tried it through Z80, 68000, x86, Pentium, Cell, x64, GPU compute. No one tried this." The theories had to wait for the tech to enable their exploration, no?
Yes and no. There were industries that did it because statistical probability/forecasting was their main focus. I'd probably say the explosion of ML is happening because storage is cheap and abundant, and now processing power is leading to its usage everywhere, especially in smaller niches where this type of AI wouldn't have justified the investment back in the day.
> My work is not magic, and I took inspiration from the works of those researchers (...)
Well... OK, if you're going to put it like that. Doesn't feel good to read that. It's not the type of response I was expecting.
As I am unable to prove my magic crap yet, we should not argue further. You win in any case, so there is no need for frustration on your side. HW RT has landed and it will succeed. Even I will follow and help with that.
> Are Minecraft's graphics the same as Battlefield 1's? The polygonal complexity difference alone is staggering. Not to mention physics, dynamic lights, dynamic shadows, particles, etc.
The question is: which game has more realistic lighting? Minecraft has shown the most accurate lighting in this thread. It's the only one shown which has infinite bounces (Q2 has just one).
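A minimal, self-contained C++ sketch of why the bounce count matters (not taken from any of the renderers discussed; the furnace-style setup, the albedo of 0.7, and the path count are illustrative assumptions). With a constant albedo rho and a uniform environment of radiance Le, every extra bounce contributes another factor of rho, so the exact answer is the geometric series Le / (1 - rho); truncating at one bounce drops all the higher-order terms, while Russian roulette lets an unbounded number of bounces be estimated without a hard cap.

```cpp
#include <cstdio>
#include <random>

int main() {
    const float Le  = 1.0f;   // uniform environment radiance
    const float rho = 0.7f;   // diffuse albedo of every surface (assumed value)

    // Truncated at a single indirect bounce: the directly seen environment
    // plus one interreflection.
    float one_bounce = Le * (1.0f + rho);

    // "Unbounded" bounces via Russian roulette: extend the path with
    // probability rho; the 1/rho survival weight cancels the albedo, so the
    // throughput stays at 1 and the estimator remains unbiased.
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    const int paths = 200000;
    double    sum   = 0.0;
    for (int i = 0; i < paths; ++i) {
        float radiance   = 0.0f;
        float throughput = 1.0f;
        for (;;) {
            radiance += throughput * Le;   // emission gathered at this bounce
            if (u(rng) >= rho) break;      // terminate with probability 1 - rho
            throughput *= rho / rho;       // albedo / survival probability == 1
        }
        sum += radiance;
    }
    float many_bounces = static_cast<float>(sum / paths);
    float exact        = Le / (1.0f - rho); // closed-form geometric series

    std::printf("one bounce : %.3f\n", one_bounce);              // 1.700
    std::printf("unbounded  : %.3f (Monte Carlo)\n", many_bounces);
    std::printf("exact      : %.3f\n", exact);                   // 3.333
    return 0;
}
```

With rho = 0.7 the single-bounce result sits at roughly half the converged value, which is why the gap between "one bounce" and "infinite bounces" is visible even before polygon counts enter the picture.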
> Now why doesn't that surprise me. Must be a conspiracy that we're just not seeing BFV RT on it, or Quake 2?
I can only do the Q2 stuff, not BFV! Only triangle raytracing can show exact reflections of triangles; I use discs to approximate stuff.
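As a rough idea of what "using discs to approximate stuff" can mean at the intersection level, here is a minimal C++ sketch of a ray-versus-disc test (the Vec3 helpers, the Disc struct, and the values in main are illustrative assumptions, not the poster's actual code). A cloud of such disc surfels can stand in for the triangle mesh when reflections only need to be approximate, which is also why exact mirror reflections of triangles still call for triangle ray tracing.

```cpp
#include <cmath>
#include <cstdio>

// Minimal 3D vector helpers for this sketch.
struct Vec3 { float x, y, z; };
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A disc "surfel": position, unit normal, radius. A cloud of these can
// approximate the scene's surfaces for blurry or glossy reflections.
struct Disc { Vec3 center; Vec3 normal; float radius; };

// Ray vs. disc: intersect the ray with the disc's supporting plane, then
// accept the hit only if it lies within the disc's radius. Returns the hit
// distance t, or a negative value on a miss.
float intersect_disc(Vec3 ro, Vec3 rd, const Disc& d) {
    float denom = dot(rd, d.normal);
    if (std::fabs(denom) < 1e-6f) return -1.0f;               // ray parallel to disc
    float t = dot(d.center - ro, d.normal) / denom;           // plane intersection
    if (t < 0.0f) return -1.0f;                               // behind the ray origin
    Vec3 p    = ro + rd * t;                                  // hit point on the plane
    Vec3 diff = p - d.center;
    if (dot(diff, diff) > d.radius * d.radius) return -1.0f;  // outside the disc
    return t;
}

int main() {
    Disc d  = {{0.0f, 0.0f, 5.0f}, {0.0f, 0.0f, -1.0f}, 1.0f}; // toy surfel
    Vec3 ro = {0.2f, 0.1f, 0.0f};                              // reflection ray origin
    Vec3 rd = {0.0f, 0.0f, 1.0f};                              // unit ray direction
    float t = intersect_disc(ro, rd, d);
    std::printf("hit t = %.3f (negative means miss)\n", t);   // prints 5.000
    return 0;
}
```

The test is cheap but imprecise compared with exact triangle intersection, which matches the trade-off described above: blurry approximations are achievable, pixel-exact BFV-style mirror reflections are not.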
I can't help but feel there is way too much romance for flexible programming here.
ASIC miners dwarf GPUs on BTC; there's no comparison on cost-performance or performance-per-watt. It started as CPU mining, which quickly moved to GPU mining because it was more performant.
(...)
Only with the cheaper cost of storage, the amount of data we now capture, and CUDA (which was released because they found data scientists repurposing pixel values in rasterization for compute) did deep learning data science happen.
(...)
Today our strongest convolutional neural networks and deep learning are driven by fixed-function Tensor Cores. Even the Tegra X1, which you find in self-driving cars, uses 16-bit floats to accelerate neural networks as best it can, and that was quickly trumped by 16-bit Tensor Cores.
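As a rough illustration of the FP16 trade-off mentioned above, here is a small C++ sketch of the mixed-precision pattern (low-precision inputs, FP32 accumulation) that Tensor Cores and FP16 paths like the Tegra X1's implement in hardware. The to_fp16_precision helper only emulates half precision's 10-bit mantissa by masking a 32-bit float (rounding and exponent-range limits are ignored), and the toy activations and weights are made up for the example.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Crude emulation of FP16 storage precision: keep only the top 10 mantissa
// bits of a 32-bit float (half precision has a 10-bit mantissa). Real
// hardware also narrows the exponent and rounds; that is skipped here.
float to_fp16_precision(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));
    bits &= 0xFFFFE000u;               // drop the low 13 mantissa bits
    std::memcpy(&x, &bits, sizeof(bits));
    return x;
}

// Mixed-precision dot product in the tensor-core style: low-precision
// inputs, full-precision accumulation. This is the core operation of a
// convolution once it has been lowered to matrix multiplies.
float dot_fp16_in_fp32_accum(const std::vector<float>& a,
                             const std::vector<float>& b) {
    float acc = 0.0f;                  // accumulate in FP32
    for (size_t i = 0; i < a.size(); ++i)
        acc += to_fp16_precision(a[i]) * to_fp16_precision(b[i]);
    return acc;
}

int main() {
    std::vector<float> act = {0.1f, 0.2f, 0.3f, 0.4f};      // toy activations
    std::vector<float> wgt = {0.5f, -0.25f, 0.125f, 1.0f};  // toy weights
    std::printf("fp16-in / fp32-accum dot = %f\n",
                dot_fp16_in_fp32_accum(act, wgt));
    return 0;
}
```

The speed-up on real hardware comes from the narrower data type: half the bandwidth and storage per value, plus dedicated units that process many such multiply-accumulates per clock.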
There is a fundamental difference between these examples and games.
The examples have a very specific algorithmic computation goal that is set in stone, and devs are looking for the fastest path to run that one algorithm as fast as they can.
Games are not like that. A game's goal is the best approximation of a certain look, but the algorithm to get there is always open to change. Nothing is set in stone. It's a tightrope of ambitions and compromises. So fixed-function HW is always a bit short-sighted, because it focuses on today's algorithmic solutions, which are not always (actually never are) tomorrow's problems. Now, I'm not against all forms of HW acceleration for current trends; I just think it's necessary to exercise caution about how specific and limiting such architectural choices are.
A typical Voodoo Graphics PCI expansion card consisted of a DAC, a frame buffer processor and a texture mapping unit, along with 4 MB of EDO DRAM. The RAM and graphics processors operated at 50 MHz. It provided only 3D acceleration and as such the computer also needed a traditional video controller for conventional 2D software. A pass-through VGA cable daisy-chained the video controller to the Voodoo, which was itself connected to the monitor. The method used to engage the Voodoo's output circuitry varied between cards, with some using mechanical relays while others utilized purely electronic components. The mechanical relays emitted an audible "clicking" sound when they engaged and disengaged.
> If Quake 2 RTX can run on PS4, would I be able to run it on the 7970 PC I have?
Yes, I guess the 7970 is almost twice the PS4 GPU. I have a 5870 (or 50), which is a bit larger as well, and I used it as a reference. But now I target next gen; I'm too late. (And of course that's just a claim from a random internet stranger.)
> There is a fundamental difference between these examples and games. (...)
That's a chicken-and-egg issue, really. You're going to select the hardware option that is going to hit the most cases and is easiest to deploy, even if it's not the most efficient or the best. But accelerating it is going to outperform the options that aren't accelerated. Every company is going to have to take a leap of faith and hope theirs is going to win. If no one makes the leap of faith, we won't be having this conversation at all.
> But now I target next gen
Next gen as in PS4?
> Next gen as in PS4?
Next gen means most likely Navi in the PS5 and the next Xbox. It also means I have to add sharp reflections; my blurry stuff is no longer good enough.