NVIDIA discussion [2024]

That's true until the fear becomes investor revolt; then game theory might lead to the opposite result, where everyone tries to spend slightly less than the others to keep investors content, and that cascades into significantly less spending over time. It's not super likely, but if AI advances don't go fast enough in the next 6 months, I can see that becoming a very serious issue.

Short-term though, the Alphabet pullback is *literally* because investors are worried they (and others) are giving *too much* money to NVIDIA. So it's fair to guess NVIDIA's next quarter will likely be a blowout. NVIDIA only gives guidance one quarter ahead these days, and next quarter is the start of Blackwell revenue, so *unless* Blackwell is delayed or unable to ramp fast enough, which could always happen until it actually ships (there are rumours of liquid-cooling issues etc., but they seem solvable in time for now), that's probably another blowout guidance as well, irrespective of any long-term trend.

In other words, NVIDIA’s long-term value is more questionable and very hard to predict, but the short-term panic feels similar to what happened before previous earnings… I don’t think the bubble is popping quite yet, and whether it’s definitely a bubble depends on how fast things advance, which frankly nobody knows for sure. Sam Altman and others taking the “scaling laws” and claiming they give us “scientific certainty” is absolutely laughable - but that doesn’t mean they won’t hold. We just don’t know yet.
 


While the competition continues to improve the performance and connectivity of their accelerators, Nvidia is building the software that enables AI adoption. I know of no competitor to NIMs, nor a competitor to Foundry. And of course, nobody has introduced a competitor to Transformer Engine or TensorRT-LLM, both of which can deliver 2-4 times the performance compared to a GPU without these features.

As enterprises work to adapt and adopt custom models for their business and applications, Nvidia is providing an easy on ramp to become an AI-enabled enterprise.
 
The new approach will be a universal runtime model for all materials from multiple sources, including real objects captured by artists, measurements, or materials generated from text prompts using generative AI. These models will be scalable across various quality levels, ranging from PC/console gaming and virtual reality to film rendering.
...
The model will help capture every single detail of the object to be rendered, including incredibly delicate details and visual intricacies such as dust, water spots, lighting, and even the rays cast by the blend of various light sources and colors. Traditionally, these materials would be rendered using shading graphs, which are not only costly for real-time rendering but also add complexity.
...
The Tensor-core architecture in modern graphics hardware also provides a step forward for these models. While Tensor cores are currently limited to compute APIs, NVIDIA exposes Tensor-core acceleration to shaders through a modified open-source LLVM-based DirectX Shader Compiler that adds custom intrinsics for low-level access, allowing Slang shader code to be generated efficiently.
...
Overall, the new NVIDIA Neural Materials approach looks to redefine the way textures and objects are rendered in real time. With a 12-24x speedup, it will allow developers and content makers to generate materials and objects faster, with ultra-realistic textures that also run fast on the latest hardware. We can't wait to see this approach leveraged by upcoming games and apps.
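
For anyone trying to picture what a "universal runtime model" means in practice, here's a minimal sketch of the general shape of such a thing: a tiny MLP that maps a per-texel latent code plus light/view directions to RGB reflectance, standing in for a baked-down shading graph. The layer sizes, latent dimension, and activations are my own assumptions for illustration, not NVIDIA's actual architecture.

```python
# Minimal sketch (not NVIDIA's actual model): a tiny MLP that replaces a
# shading graph with a learned per-texel latent code plus a small network.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8          # per-texel material code (assumed size)
HIDDEN = 32             # hidden width (assumed)
IN_DIM = LATENT_DIM + 6 # latent + light dir (3) + view dir (3)

# Randomly initialised weights stand in for weights that would be
# learned offline from reference renders or measured materials.
W1 = rng.normal(0, 0.1, (IN_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, HIDDEN)); b2 = np.zeros(HIDDEN)
W3 = rng.normal(0, 0.1, (HIDDEN, 3));      b3 = np.zeros(3)

def relu(x):
    return np.maximum(x, 0.0)

def eval_neural_material(latent, wi, wo):
    """Evaluate RGB reflectance for one shading point.

    latent: (LATENT_DIM,) code sampled from a latent texture
    wi, wo: unit light / view directions in the local shading frame
    """
    x = np.concatenate([latent, wi, wo])
    h = relu(x @ W1 + b1)
    h = relu(h @ W2 + b2)
    # Softplus keeps the reflectance non-negative.
    return np.log1p(np.exp(h @ W3 + b3))

# Example: one texel, light and view both 45 degrees off the normal.
latent = rng.normal(size=LATENT_DIM)
wi = np.array([0.0, 0.7071, 0.7071])
wo = np.array([0.7071, 0.0, 0.7071])
print(eval_neural_material(latent, wi, wo))
```

The appeal is that the same small network shape can stand in for wildly different material graphs; only the latent texture and weights change.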





*Faster than offline rendering that isn't built for efficiency at all. But a "medium model" taking 11ms just for BRDF rendering is crazy (actually, 3ms for a "small" model at 1080p on what is probably a 4080 or something is crazy)

A universal BSDF model is the general direction of research, but part of using a neural net here is unneeded. The fine folks who developed The Callisto Protocol have the right idea: you can decompose most relevant BSDFs into a relatively small set of 1D parameters per pixel that scale arbitrarily, and then you set those depending on what you want the output to be.

Traditional BSDFs do a lot of physics approximation that ends up being expensive and reducible to this small set of 1D parameters anyway. Why are you trying to simulate light transport within a single pixel so much when you could just precalculate it? Where a neural net could come in is taking your giant material slab, represented by a huge graph, and mapping it to your arbitrary set of 1D parameters (precalculate it); then you only have to evaluate your outputs, the set of highly optimized 1D parameters of a BSDF, but you used a neural net to pre-train what all those outputs are based on complex inputs. In fact, you could skip the "graph" part and the BSDF approximation entirely and just do full path tracing of microscale geometry if you really wanted. Either way it's the same runtime cost.
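
To make that a bit more concrete, here's a rough sketch of the runtime side of the idea: the per-pixel material collapses to a handful of scalar parameters, and the shader only evaluates a cheap analytic BRDF driven by them. The specific lobe (GGX plus Lambert) and parameter names are my own assumptions for illustration, not The Callisto Protocol's actual model; the neural net (or any other fitter) would live entirely offline, mapping the material graph or scan data to these scalars.

```python
# Sketch: cheap analytic BRDF evaluated from a few per-pixel scalars.
import numpy as np

def ggx_lambert_brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h,
                     albedo, roughness, f0):
    """Runtime cost is just this, regardless of how complex the source
    material graph was."""
    a2 = max(roughness, 1e-3) ** 4
    # GGX normal distribution
    d = a2 / (np.pi * (n_dot_h * n_dot_h * (a2 - 1.0) + 1.0) ** 2)
    # Schlick Fresnel
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    # Smith visibility approximation (UE4-style k for direct lighting)
    k = (roughness + 1.0) ** 2 / 8.0
    vis = 1.0 / ((n_dot_l * (1 - k) + k) * (n_dot_v * (1 - k) + k))
    specular = d * f * vis / 4.0
    diffuse = albedo / np.pi
    return diffuse + specular

# The "precalculate it" step: an offline fit collapses the heavy material
# description into these few numbers per pixel (hypothetical values).
pixel_params = {"albedo": 0.42, "roughness": 0.35, "f0": 0.04}
print(ggx_lambert_brdf(0.8, 0.9, 0.95, 0.7, **pixel_params))
```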

That's basically what The Callisto Protocol did, but better than brute-force rendering. They mapped their actual high-res scan data to their simple-to-evaluate BRDF parameters, then generated an extra "fixup" pass that just tried to add back in anything that didn't map to their BRDF. Forget 3ms for 1080p on a 4080 or whatever; they managed photo-reference-grade stuff, as good as some of the best Hollywood can do, on a Series S. You can find the paper at the bottom here: https://advances.realtimerendering.com/s2023/index.html
 
Not quite, it doesn't take into account the roughly 10% marked as "Other".
All iGPUs are filtered out of the chart.

But a "medium model" taking 11ms just for BRDF rendering is crazy (actually 3ms for a "small" model at 1080p on what is probably a 4080 or something is crazy)
Just 3ms with path tracing; the 11ms is for a very high sample count. Both are very good for path-traced rendering.
 
They mapped their actual high-res scan data to their simple-to-evaluate BRDF parameters, then generated an extra "fixup" pass that just tried to add back in anything that didn't map to their BRDF. Forget 3ms for 1080p on a 4080 or whatever; they managed photo-reference-grade stuff, as good as some of the best Hollywood can do, on a Series S. You can find the paper at the bottom here:
The problem with that approach is that it required a 335-page presentation, a few years of research, and a few extremely talented industry veterans to bring it to life, and this was essentially for just a single type of material.

Comparing something like this to neural networks is like comparing extremely expensive handmade watches to mass-produced ones. For neural networks, achieving a typical execution time of 1 ms for BRDFs may be just a few hardware tweaks away and could likely be feasible within a generation or two. In contrast, the approach you suggest is not scalable at all to a broad range of materials and would never be feasible due to its labor-intensive nature.

Regarding the "fixup" pass, it simply encodes the delta between the target offline rendering and the realtime rendering into a texture, then applies the texture in the realtime BRDF as a correction. This is similar to baking ambient occlusion into a texture, which can never be accurate across a myriad of lighting and viewing configurations, because you can realistically only bake the view-invariant elements, such as diffuse, at fixed lighting configurations. You also need similar corrections for the specular response, and likely for subsurface scattering, but you can't bake those into textures because they are either view dependent, dependent on the lighting configuration, or both. With neural networks you don't need all these adjustments; it would all be handled automatically during training through the use of loss functions.
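
Just to illustrate the delta-bake part in code terms, here's a minimal sketch, assuming a simple additive correction and toy array shapes (none of this is the actual pipeline from the paper):

```python
# Sketch of a "fixup"/delta bake: store what the cheap BRDF missed at one
# fixed lighting/view configuration, add it back at runtime.
import numpy as np

H, W = 4, 4  # tiny stand-in texture resolution

# Stand-ins for the two renders at the bake-time configuration.
reference_offline = np.random.default_rng(1).uniform(0.0, 1.0, (H, W, 3))
realtime_brdf     = np.random.default_rng(2).uniform(0.0, 1.0, (H, W, 3))

# Bake: encode whatever the cheap BRDF missed into a delta texture.
fixup_texture = reference_offline - realtime_brdf

def shade_with_fixup(realtime_value, u, v):
    """Runtime: cheap BRDF result plus the baked correction.

    Only exact for the view/lighting configuration used at bake time,
    which is the limitation pointed out above for view-dependent terms
    like specular or subsurface scattering.
    """
    return np.clip(realtime_value + fixup_texture[v, u], 0.0, None)

print(shade_with_fixup(realtime_brdf[0, 0], 0, 0))  # recovers the reference
```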
 
Apple decided on the path of least resistance as they plan to license Google's Gemini AI engine for the iPhone.
 
A universal BSDF model is the general direction of research, but part of using a neural net here is unneeded. The fine folks who developed The Callisto Protocol have the right idea: you can decompose most relevant BSDFs into a relatively small set of 1D parameters per pixel that scale arbitrarily, and then you set those depending on what you want the output to be.

Traditional BSDFs do a lot of physics approximation that ends up being expensive and reducible to this small set of 1D parameters anyway. Why are you trying to simulate light transport within a single pixel so much when you could just precalculate it? Where a neural net could come in is taking your giant material slab, represented by a huge graph, and mapping it to your arbitrary set of 1D parameters (precalculate it); then you only have to evaluate your outputs, the set of highly optimized 1D parameters of a BSDF, but you used a neural net to pre-train what all those outputs are based on complex inputs. In fact, you could skip the "graph" part and the BSDF approximation entirely and just do full path tracing of microscale geometry if you really wanted. Either way it's the same runtime cost.
That paper highlights a growing problem in the neural rendering field, where researchers with very little background in computer graphics or optical physics are being drawn in to produce these papers, or, even when they do have such experience, they go about ignoring prior art in those areas ...

I don't see how any graphics engineer in the offline rendering industry would seriously entertain the idea of having no energy conservation or reciprocity in their material shading models, but it's a scary thought that the real-time graphics industry might actually pick up on these harmful results, discarding more established and accurate theories of physically based rendering in favour of inferior, non-physically-based neural rendering ...
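
For anyone unfamiliar with what those two constraints actually mean, here's a small Monte Carlo check you can run against any BRDF, neural or analytic. The Lambertian BRDF below is just a placeholder under test, and the sampling scheme is an assumption for illustration:

```python
# Energy conservation and reciprocity checks for an arbitrary BRDF.
import numpy as np

rng = np.random.default_rng(0)

def lambert_brdf(wi, wo, albedo=0.8):
    return albedo / np.pi  # constant; a learned model would go here

def sample_hemisphere(n):
    """Uniformly sample n directions on the upper hemisphere (z >= 0)."""
    u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
    z = u1
    r = np.sqrt(np.clip(1.0 - z * z, 0.0, 1.0))
    phi = 2.0 * np.pi * u2
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# Energy conservation (white furnace): the integral of f(wi, wo) * cos(theta_i)
# over the hemisphere must not exceed 1 for any wo.
wo = np.array([0.0, 0.0, 1.0])
wi = sample_hemisphere(100_000)
pdf = 1.0 / (2.0 * np.pi)  # uniform hemisphere pdf
f = np.array([lambert_brdf(w, wo) for w in wi])
albedo_estimate = np.mean(f * wi[:, 2] / pdf)
print("hemispherical albedo ~", albedo_estimate, "(should be <= 1)")

# Reciprocity: f(wi, wo) should equal f(wo, wi).
a, b = sample_hemisphere(1)[0], sample_hemisphere(1)[0]
print("reciprocity gap:", abs(lambert_brdf(a, b) - lambert_brdf(b, a)))
```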
 

NVIDIA is considering IFS as a key ally for its packaging needs, and interestingly, Intel will start supplying the tech giant as soon as next month.
...
It is said that IFS plans to supply NVIDIA with 5,000 packaging wafers per month, which is significantly more than competitors such as TSMC are providing, though there is room for expansion. IFS might be given responsibility for NVIDIA's "in-demand" Hopper generation of AI products, including accelerators such as the H100. In terms of the advanced packaging to be supplied, NVIDIA is said to show massive interest in Intel's Foveros 3D stacking technology, a direct competitor to TSMC's mainstream CoWoS-S packaging process.
 

Would IFS package silicon from TSMC though? I don't think TSMC does packaging for non-TSMC silicon. Not to mention that their packaging technologies are very different and then Nvidia would have to revalidate the parts. Doesn't exactly make sense when TSMC is said to be expanding packaging capacity significantly through 2025.
 
Would IFS package silicon from TSMC though? I don't think TSMC does packaging for non-TSMC silicon. Not to mention that their packaging technologies are very different and then Nvidia would have to revalidate the parts. Doesn't exactly make sense when TSMC is said to be expanding packaging capacity significantly through 2025.
Yea, they're completely different things. Not something you can just 'dual source'.
 
Would IFS package silicon from TSMC though? I don't think TSMC does packaging for non-TSMC silicon. Not to mention that their packaging technologies are very different and then Nvidia would have to revalidate the parts. Doesn't exactly make sense when TSMC is said to be expanding packaging capacity significantly through 2025.
Who do you think packages Intel chips with TSMC tiles?
 
Would IFS package silicon from TSMC though? I don't think TSMC does packaging for non-TSMC silicon. Not to mention that their packaging technologies are very different and then Nvidia would have to revalidate the parts. Doesn't exactly make sense when TSMC is said to be expanding packaging capacity significantly through 2025.

It's plausible. Intel is actually ahead of TSMC on packaging (at least tech-wise, if not entirely in terms of customer service). I'd imagine AMD would consider them if it didn't mean saving their rival in servers/laptops/etc.

Intel seems to have made it work, though I'm unsure how well. But Nvidia certainly has the cash to light on fire to make it work for themselves as well.
 