Agreed, this conversation has probably developed into something worthy of its own thread now. That said, I don't think the original comment was particularly off topic. This is simply "the Nvidia Discussion", so discussing the societal implications of Nvidia's primary business model seems pretty relevant. Also, if you feel it's off topic enough to call out, why follow the call-out with a post on the same "off topic"?
No, it's not "surely that these things will at some point become sentient". You then water it down yourself by saying it sounds like science fiction; language which suggests you're unsure.
Of course I'm unsure. I can't be certain of something that hasn't yet happened, obviously. That said, unless there is literally some supernatural aspect to sentience that we can't recreate in the physical world, it seems fairly inevitable (assuming we continue on a course of technological progress) that we will at some point be able to recreate sentience artificially. The only real question is whether our current efforts will be sufficient to do so. And when you consider that we are now looking at the prospect of neural networks with (by our best estimates) orders of magnitude more processing power than the human brain, coupled with near-instant recall of enormous swathes of information far beyond the capacity of human memory, I'd say it's fair to claim there is a reasonably good chance that will happen in the near to medium term - powered by Nvidia hardware, just to stay on topic.
Actually, the burden of proof would be on those claiming they would become "sentient".
I'm not really sure what you're trying to say with this. No one is trying to "prove" before it happens that AI is going to become sentient. The conversation is about the risk of that happening and what we should, or shouldn't, be doing to mitigate that risk.
Regardless, we'd first need to worry about the same bad actors being able to do more harm with the help of AI (a new tool that can obviously be used to cause harm too), long before awakened AIs "decide" to wipe us out.
Absolutely agree with this, and that is of course another concern with the technology, but it's one that warrants the West creating ever more powerful AIs, which of course circles back to the initial concern.
When it comes to creating sentience, how do we know this is of any relevance, though? As you say...
Personally I've not heard a convincing explanation of where emergent intelligence in humans comes from. The debates about the potential of AI, going back to the 1950s, have tended to stall on the basic questions of what intelligence is anyway, why humans have it, and why most other living beings seemingly don't.
I would argue that many creatures have both sentience and intelligence, although obviously at a lesser capacity than humans for at least the latter, if not the former. But given we have literally no idea how sentience forms, how do we know our AI models are less likely to create it than our own brains? For all we know, the artificial model is more likely to result in sentience than the natural one. As far as we understand, it has more processing power and is trained on a larger amount of "sentient-generated data" than the average human brain has absorbed by whatever point we consider it to have achieved sentience.
The human brain doesn't have a FLOPs rating; it is not a Von Neumann architecture.
Estimates have certainly been made based on the best available data, and even the highest of those estimates fall far below current state-of-the-art supercomputers (commonly cited figures put the brain on the order of 10^16 operations per second, while today's exascale machines already exceed 10^18 FLOPS), let alone the planned systems with orders of magnitude more processing capability.
Given that, any AI is going to be a stab in the dark. Maybe some company will stumble across emergent AI on current architectures, but it hasn't happened yet, and neither has the atomic-powered flying car that I was promised.
I'm not sure those two examples are really comparable, though, given the current state of the technology and the economics behind both concepts.