NVIDIA discussion [2024]

If an alien civilisation had come to us 30 years ago with the software for AGI, we almost certainly couldn't have computed it at sufficient scale. We absolutely could today.

Or we’re underestimating the flops required to generate actual intelligence.
 
Or we’re underestimating the flops required to generate actual intelligence.

Simulate. Not generate.

The human brain doesn't have a FLOPS rating; it is not a von Neumann architecture.

Personally, I've not heard a convincing explanation of where emergent intelligence in humans comes from. The debates about the potential of AI, going back to the 1950s, have tended to stall on the basic questions of what intelligence actually is, why humans have it, and why most other living beings seemingly do not.

Given that, any AI is going to be a stab in the dark. Maybe some company will stumble across emergent AI on current architectures, but it hasn't happened yet, and neither has the atomic-powered flying car that I was promised.
 
And massively off topic.

Agreed, this conversation has probably developed into something worthy of its own thread now. That said, I don't think the original comment was particularly off topic. This is simply "the Nvidia Discussion", so discussing the societal implications of Nvidia's primary business model seems pretty relevant. Also, if you feel it's off topic enough to call it out, why follow up the call-out with a post on the same "off topic"?

No, it's not "surely that these things will at some point become sentient". Even you then water it down by saying it sounds like science fiction; language which shows that you might be unsure yourself.

Of course I'm unsure. I can't be certain of something that hasn't yet happened, obviously. That said, unless there is literally some supernatural aspect to sentience that we can't recreate in the physical world, it seems fairly inevitable (assuming we continue on a course of technological progress) that we will at some point be able to recreate sentience artificially. The only real question is whether our current efforts will be sufficient to do so. And when you consider that we are now looking at the prospect of neural networks with (by our best estimates) orders of magnitude more processing power than the human brain, coupled with near-instant recall of enormous swathes of information far beyond the capacity of human memory, I'd say there is a reasonably good chance it will happen in the near to medium term - powered by Nvidia hardware, just to stay on topic ;)

Actually the burden of proof would be on those claiming they would become "sentient".

I'm not really sure what you're trying to say with this. No one is trying to "prove" before it happens that AI is going to become sentient. The conversation is about the risk of that happening and what we should or shouldn't be doing to mitigate that risk.

Regardless, we'd first need to worry about the same bad actors being able to do more harm with the help of AI (a new tool that can obviously be used to cause harm too), long before awakened AIs "decide" to wipe us out.

Absolutely agree with this, and that is of course another concern with the technology, but one that warrants the West creating ever more powerful AIs, which of course circles back to the initial concern.

Simulate. Not generate.

When it comes to creating sentience, how do we know this is of any relevance, though? As you say...

Personally, I've not heard a convincing explanation of where emergent intelligence in humans comes from. The debates about the potential of AI, going back to the 1950s, have tended to stall on the basic questions of what intelligence actually is, why humans have it, and why most other living beings seemingly do not.

I would argue that many creatures have both sentience and intelligence, although obviously at a lesser capacity than humans for at least the latter, if not the former. But given we have literally no idea how sentience forms, how do we know our AI models are less likely to create it than our own brains? For all we know, the artificial model is more likely to result in sentience than the natural one. As far as we understand, it has more processing power and is trained on a larger amount of "sentient-generated data" than the average human brain had at whatever point we consider it to have achieved sentience.

The human brain doesn't have a FLOPS rating; it is not a von Neumann architecture.

Estimates have certainly been made based on the best available data, and even the highest of those estimates are far below the current state-of-the-art supercomputers, let alone those planned with orders of magnitude more processing capability.

Given that, any AI is going to be a stab in the dark. Maybe some company will stumble across emergent AI on current architectures, but it hasn't happened yet, and neither has the atomic-powered flying car that I was promised.

I'm not sure those two examples are really comparable though given the current state of the technology and economics behind both concepts ;)
 
I'm not really sure what you're trying to say with this. No one is trying to "prove" before it happens that AI is going to become sentient. The conversation is about the risk of that happening and what we should or shouldn't be doing to mitigate that risk.
"Burden of proof" in that proof or reasoning that there is a risk significant in any way (compared for example with chances of asteroids hitting Earth and killing people).

Something beyond feelings and movie references. It's just an imaginary threat otherwise
 
"Burden of proof" in that proof or reasoning that there is a risk significant in any way (compared for example with chances of asteroids hitting Earth and killing people).

Something beyond feelings and movie references. It's just an imaginary threat otherwise

The 'reasoning' that there is a risk is pretty obvious, and I'm far from the only person to raise it; that includes people who work in the industry itself.

If I really need to spell it out: we are deliberately creating intelligent machines with processing capacities that, as far as we're able to determine, greatly exceed those of the human brain. We don't understand how sentience emerges, but we do see a fairly clear link in nature between intelligence and sentience.

So the reasoning that there is a risk seems pretty solid, and I would suggest that the burden of proof sits with you in making what is, IMO, the far bolder claim that there is no risk at all.
 
We don’t need to get all the way to sentience for there to be risk. There’s plenty of risk already in any software ecosystem. AI just adds another layer of risk and complexity on top, particularly given our collective enthusiasm to delegate control and decision making to the machines.
 
The 'reasoning' that there is a risk is pretty obvious, and I'm far from the only person to raise it; that includes people who work in the industry itself.

If I really need to spell it out: we are deliberately creating intelligent machines with processing capacities that, as far as we're able to determine, greatly exceed those of the human brain. We don't understand how sentience emerges, but we do see a fairly clear link in nature between intelligence and sentience.

So the reasoning that there is a risk seems pretty solid, and I would suggest that the burden of proof sits with you in making what is, IMO, the far bolder claim that there is no risk at all.
It's impossible to prove that something doesn't exist, and consequently the risk of *anything* is greater than zero.

What I'm saying is that if one can't (at the very least) model this "evil intelligence emerges" phenomenon, assigning a risk value is random guesswork. (If anyone has an actual model, then sure, it could be analyzed, accepted, or rejected.)

E.g.: why does the risk of evil AI emerging merit more work and attention than, say, the risk of the Christian Armageddon starting? Because... feelings; no science, no numbers.
I'd say that even the use of the term "risk" is inappropriate for AI sentience, as "risk" would imply that we can in some way quantify it, which is misleading.
 
I agree with "risk" statements being misleading. Before risk can be evaluated, we need to start with a hazard and then talk about mitigating factors. For example:

The hazard is a severe rainstorm coming that could flood my home. Do I load up my precious belongings and flee?

The mitigating factors include the higher relative elevation of my house versus the surrounding areas, significant drainage upgrades to my yard, and multiple rainy days in recent history which have allowed the ground to be more accepting of moisture (versus very dry weather, which causes the ground to harden and actually become more resistant to soaking up rain). We also do not have a crawl space or a basement; our house is built on a four-inch floated slab.

The risk is quite low, as there exist sufficient mitigating factors to reasonably avoid disaster as a result of the incoming storm. The risk is always non-zero, but I'm comfortable with where we are.

The same applies to any sort of emergent sentience in AI. So much of what AI currently "is" relates to how it has interpreted humans to behave through the works of fiction and nonfiction we have fed it via training, pruning, and distillation. The ones we mostly think about are Large Language Models, which can do a good job of passing the Turing test because they're regurgitating the aggregation of millions upon millions of human literary sources. They know how to mimic us very convincingly! However, when you ask an LLM a question, ultimately it's answering based on what the data model points to as a human response.

The hazard is real, yet I'm not convinced it's a great hazard. And even if the hazard comes to fruition, there are stacks upon stacks of mitigating factors. It's like the XKCD comic of the great robot uprising, where the Roomba comes after stickman and then gets hosed down by the faucet sprayer and dies.

In closing, let's go back to the hazard statement: How do we measure purported consciousness of an entity or device specifically and meticulously crafted to emulate it? And then, what controls are we giving these AIs to reach out into the world?
 
It's impossible to prove that something doesn't exist, and consequently the risk of *anything* is greater than zero.

What I'm saying is that if one can't (at the very least) model this "evil intelligence emerges" phenomenon, assigning a risk value is random guesswork. (If anyone has an actual model, then sure, it could be analyzed, accepted, or rejected.)

E.g.: why does the risk of evil AI emerging merit more work and attention than, say, the risk of the Christian Armageddon starting? Because... feelings; no science, no numbers.
I'd say that even the use of the term "risk" is inappropriate for AI sentience, as "risk" would imply that we can in some way quantify it, which is misleading.
This perceived risk of "evil intelligence" is just feeding our egotistic desire to play god (perhaps because of how powerless we feel otherwise).

Let's not distract ourselves from the real challenges. Explainability. Software complexity. A loss of confidence in any media as evidence of truth. And a general dumbing down of humanity as a larger section of society can afford to stop exercising their brains.
 
April 1, 2024

And there is no talk of what technology the Stargate system will be based upon, but we would bet Satya Nadella’s last dollar that it will not be based on Nvidia GPUs and interconnects. And we would bet our own money that it will be based on future generations of Cobalt Arm server processors and Maia XPUs, with Ethernet scaling to hundreds of thousands to 1 million XPUs in a single machine. We also think that Microsoft bought the carcass of DPU maker Fungible to create a scalable Ethernet network and is possibly having Pradeep Sindhu, who founded Juniper Networks and Fungible, create a matching Ethernet switch ASIC so Microsoft can control its entire hardware stack. But that is just a hunch and a bunch of chatter at this point.

We also think it would be dubious to assume that this XPU will be something as powerful as the future Nvidia X100/X200 GPU or its successor, which we do not know the name of. It is far more likely that Microsoft and OpenAI will try to massively scale networks of less expensive devices and radically lower the overall cost of AI training and inference.

Their business models depend upon this happening.

And it is also reasonable to assume that at some point Nvidia will have to create an XPU that is just jam-packed with matrix math units and lose the vector and shader units that gave the company its start in datacenter compute. If Microsoft builds a better mousetrap for OpenAI, Nvidia will have to follow suit.

Stargate represents a step function in AI spending for sure, and perhaps two step functions depending on how you want to interpret the data.
...
The thing to remember is that Microsoft can keep the current generation of GPU or XPU for OpenAI’s internal use – and therefore its own – and sell the N-1 and N-2 generations to users for many years to come, very likely getting a lot of its investment bait back to fish with OpenAI again. So those investments are not sunk costs, per se. It is more like a car dealer driving around a whole bunch of different cars with dealer plates on them but not running the mileage up too high before selling them.

The question is this: Will Microsoft keep investing vast sums in OpenAI so it can turn around and rent this capacity, or will it just stop screwing around and spend $100 billion on OpenAI (the company had a valuation of $80 billion two months ago) and another $110 billion or so on infrastructure to have complete control of its AI stack?

Even for Microsoft, those are some pretty big numbers. But, as we said, if you look at the period between 2024 and 2028, Microsoft probably has around $500 billion in net income to play with. Very few other companies do.
 
April 1, 2024
H100 is already pretty much a bundle of matrix multiply units though. They can strip out the vector and high-precision stuff - and Blackwell is already doing the latter - but just how much of an SM's floor plan is currently dedicated to them? I don't think it's a lot.

It doesn't appear to be hurting their competitiveness either. Google only submitted gptj-99 to MLPerf 4, so I assume that's their best foot forward, yet L40S is ~5x faster than TPUv5e in that. And the L40S is vastly more general-purpose hardware than H100.

The main incentive for Microsoft/OpenAI - and Google before them - isn't going to be a more dedicated ASIC, but Nvidia's margins. You can deliver much worse PPA but still be cheaper than paying 80% margins.
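To put some very rough numbers on that margin argument - every figure below is made up purely to illustrate the arithmetic, not an actual price or margin:

```python
# Back-of-the-envelope: why much worse PPA can still beat buying at 80% gross margin.
# All numbers are hypothetical.

nvidia_price = 100.0                    # what a customer pays per accelerator (arbitrary units)
nvidia_gross_margin = 0.80              # assumed gross margin on that sale
nvidia_build_cost = nvidia_price * (1 - nvidia_gross_margin)  # ~20 to actually make the part

ppa_penalty = 3.0                       # in-house chip needs 3x the silicon for the same performance
inhouse_cost = nvidia_build_cost * ppa_penalty                # ~60 per "equivalent" unit of performance

print(f"Buying: {nvidia_price:.0f} per unit of performance")
print(f"In-house (3x worse PPA): {inhouse_cost:.0f} per unit of performance")
```

Even with a 3x perf-per-dollar-of-silicon disadvantage, the in-house part lands well under the bought-in price in this toy model, which is the whole incentive.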
 
I think Google's original incentive for TPUs was to be a more dedicated ASIC... back when they were designed in 2013(!) and used in production in 2015(!), which is 2+ years before Volta! But the amount of "dedicated AI silicon" has increased significantly every single generation, and I agree completely that the PPA benefit of an XPU-style device is probably quite small compared to Blackwell (especially in terms of power consumption).

Now if you look at what an actual datacenter inference accelerator that is competitive on MLPerf inference power efficiency looks like, it's far from being a giant sea of matrix multiply units without programmability: https://hc33.hotchips.org/assets/program/conference/day2/Corrected Qualcomm HotChips 2021 Final_v2.pdf (Page 5 & 6) - and even then, they didn't have a scalability story and didn't have enough memory capacity, so they were not suitable for big LLMs until the newer Cloud AI 100 Ultra (which again won't be suitable for next-gen huge LLMs). NVLink and Mellanox are a big part of NVIDIA's current advantage (but they don't have any moat there - others just need to catch up, and they will).

Lest we forget, there were many special-purpose architectures especially for mobile/edge devices like Imagination's PowerVR NNA (and NVIDIA's DLA which is still supported in Orin but at a fraction of the flops of the GPU) which were great at Convolutional Neural Networks but inefficient for Transformers and other emerging use cases. Needless to say, many are dead now, while Jensen is at the top of the r/wallstreetbets banner somehow...

That doesn't mean OpenAI/MS won't decide to avoid NVIDIA chips for other reasons (e.g. margins and API control) in the future for Stargate or other projects, although if I was bidding for a $50B contract, I'd probably accept 50% gross margins instead of my usual 80% gross margins if that means winning the deal... and if MS can get a ~50% discount from NVIDIA and save $25B by threatening to use their own chips, that makes spending a few billion on their own chip R&D extremely profitable even if they never sell in the same volume. The same thing is probably already happening to a much lesser extent with Maia 100 which isn't super competitive but an effective threat nonetheless.
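The same point as quick arithmetic, using the hypothetical figures from the paragraph above (deal size, discount, and R&D budget are all assumptions from that scenario, not real numbers):

```python
# The value of a credible in-house-chip threat, using the hypothetical figures above:
# a $50B deal, a ~50% discount extracted from the vendor, and a few $B of chip R&D.

deal_at_list_price = 50e9    # hypothetical contract value at the vendor's usual margins
discount = 0.50              # discount won by credibly threatening to go in-house
savings = deal_at_list_price * discount        # $25B
chip_rnd_spend = 3e9         # assumed "few billion" on in-house chip development

print(f"Savings from the discount: ${savings / 1e9:.0f}B")
print(f"Net of R&D spend:          ${(savings - chip_rnd_spend) / 1e9:.0f}B")
```

In other words, the chip programme can pay for itself many times over purely as negotiating leverage, even if the chips never ship in volume.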

But if MS manages to beat NVIDIA's PPA and SW ecosystem then it's game over, obviously.
 
The RTX 3060 (desktop + laptop) cements its position as the leader at the top of the Steam charts, where it has been for more than 6 months now, with a share of 10%, followed by the 2060, 1650, 3060 Ti and 3070.

Was the market share always this tilted to NVIDIA? I knew they were much bigger, but this is absurd. The top 15 are all NVIDIA. There are no AMD dGPUs until the RX 580, which sits at around the same level in the chart as the RTX 4090. WTF.
 
Was the market share always this tilted to NVIDIA? I knew they were much bigger, but this is absurd. The top 15 are all NVIDIA. There are no AMD dGPUs until the RX 580, which sits at around the same level in the chart as the RTX 4090. WTF.
Yeah, this isn't anything new. A snapshot here from a few years ago looks pretty similar:


I've been meaning to grab the raw data someone's helpfully scraped below and put it through some modern visualization tools, but haven't managed to bump it up on my list of projects yet:
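For what it's worth, something like this is roughly what I have in mind - a minimal pandas/matplotlib sketch, where the file name, column names, and the crude vendor-matching heuristic are all assumptions about whatever shape the scraped data actually has:

```python
# Minimal sketch: plot NVIDIA vs AMD share over time from scraped Steam survey data.
# "steam_hw_survey.csv", its columns, and the vendor regex are assumed, not real.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("steam_hw_survey.csv")      # assumed: one row per GPU model per month
df["date"] = pd.to_datetime(df["date"])

# Crude vendor tag so the NVIDIA/AMD split is visible at a glance
# (a real version would want a proper model-to-vendor lookup table).
is_amd = df["gpu"].str.contains(r"Radeon|RX |HD ", regex=True)
df["vendor"] = is_amd.map({True: "AMD", False: "NVIDIA"})

share = df.groupby(["date", "vendor"])["share_pct"].sum().unstack(fill_value=0)
share.plot.area(title="Steam hardware survey: share by vendor")
plt.ylabel("Share (%)")
plt.show()
```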

 
Yeah, this isn't anything new. A snapshot here from a few years ago looks pretty similar

It’s been like this for as long as I can remember. Nvidia has always had the top 10+ entries.
Well, not exactly. Before Kepler, AMD got pretty decent representation in the survey, with popular cards like the 9800, 9700, X800, HD 48xx, HD 58xx, 57xx, etc., all appearing in the top 10 and even top 5.

However, after Kepler, and especially Maxwell, NVIDIA pretty much cleaned house in the top 15 category.

Here is a video showing the top 15 up to 2019.


What also changed are the percentages: AMD's share is steadily shrinking.

For example, since 2018, AMD's combined share across the RX 5000, 6000, and 7000 families of GPUs is about 14%, vs 86% for the GTX 16, RTX 20, 30, and 40 series GPUs. In the past, AMD used to have a higher share than this (~20%) for its recent architectures.

 
What kind of happened starting with Kepler and that timeframe is that both the market and Nvidia pivoted towards laptops and OEMs (especially laptops), even for enthusiast gaming GPUs.

This is kind of why there is sometimes a bit of a disconnect between the impression you get from enthusiast commentary, which primarily buys at retail for DIY, and the actual bulk of the market going into laptops/prebuilts, with all the other different considerations and conditions involved. Not to mention the regional variance as well, and which regions online discussions primarily center around.
 