Market behaviour and pricing strategies for consumer GPUs

Something that is 95% as good for a fraction of the cost is basically catnip for investors.

Who is having LLMs solve PhD-level problems? These models can't think; they regurgitate information they've been trained on.
Everyone doing research into LLMs, actually. That's the race right now: the race towards 'reasoning'.

Perhaps another way to look at LLMs is that we're teaching AI to understand language. Right now, that language is English. As it understands English better, we can provide it the knowledge base needed for reasoning. A simple example: here's the rulebook for a board game. If the rules are written well enough, we would expect the LLM to be able to reference this rulebook, and perhaps its appendices, and give official rulings on the board game without assistance.
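As a loose sketch of that "give the model the rulebook" idea, here is a toy pipeline that chunks a rulebook, retrieves the chunk most relevant to a question, and builds a grounded prompt around it. Everything here (the chunking, the keyword-overlap scoring, the prompt shape) is an illustrative assumption, not any particular product's API.

```python
import re

# Toy "ground the model in the rulebook" pipeline: chunk the rules,
# retrieve the chunk most relevant to the question, and build a
# prompt that asks for a ruling based on that chunk.

def tokens(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def chunk_rules(text: str) -> list[str]:
    """Split a rulebook into paragraph-sized chunks."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def retrieve(chunks: list[str], question: str) -> str:
    """Return the chunk sharing the most words with the question."""
    q = tokens(question)
    return max(chunks, key=lambda c: len(q & tokens(c)))

def build_prompt(rulebook: str, question: str) -> str:
    """Assemble a prompt asking the model to rule from the text."""
    context = retrieve(chunk_rules(rulebook), question)
    return (f"Using only this rule:\n{context}\n\n"
            f"Question: {question}\nGive an official ruling.")

rules = ("Movement: each pawn moves one space per turn.\n\n"
         "Combat: the attacker rolls two dice; ties go to the defender.")
print(build_prompt(rules, "In combat, who wins on tied dice?"))
```

Real systems use embedding-based retrieval rather than word overlap, but the overall shape (select the relevant passage, then ask the model to answer from it) is the same.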

And that should also mean that, based on its reading of that knowledge, it would be able to infer exception cases as well.

You can draw a straight line from there to applying it to law.

And from there, you can start to apply it to PhD-level problems. If we provide AI the knowledge base to answer certain questions, how long would it take to come up with the same response a PhD researcher would reach studying the same topics? If it takes a PhD researcher, say, two years to read through all of that knowledge and come up with an experiment to solve a problem, it could take an LLM a week, or less, to rifle through the same knowledge and come up with the same experiments to test.

So one could draw a line from that and ask: what if we give it all the knowledge from all the fields, related or not? With access to all the world's knowledge and experiments, could it now solve, or at least come up with potential solutions or breakthroughs in our thinking on, issues like

climate change,
fusion reactors,
materials engineering,

etc.

So as long as we continue to improve LLMs' ability to understand language, context, math, and so on, we have a way to process huge amounts of data in a single entity, absorbing and understanding the specialties we normally hire individual specialists for, and it can use all of this connected knowledge to develop potential solutions we never knew were connected.

We are now moving to the world of AI; that's why I stopped Googling. You'll never get the answers you're really looking for. Perplexity is probably the best web-based AI: it's fast and useful.

*****

Can ChatGPT o1 solve PhD-level problems?

******

Yes, ChatGPT o1-preview has demonstrated the ability to solve PhD-level problems across various disciplines, particularly in physics, chemistry, and biology[1][2]. In a notable test, physicist Kyle Kabasares challenged the model with complex problems from Jackson's Classical Electrodynamics, a notoriously difficult textbook used in advanced physics courses[1].

The o1-preview model showed impressive capabilities:

1. It solved a problem from Jackson's textbook in just under two minutes, a task that typically takes graduate students about a week and a half[1].

2. The model tackled a complex two-part problem involving electromagnetic wave scattering, demonstrating human-like reasoning and problem-solving skills[1].

3. OpenAI reports that the o1 models perform similarly to PhD students when working on physics, chemistry, and biology problems[2].

4. The o1 models scored 83% on a qualifying exam for the International Mathematics Olympiad, a significant improvement over the previous GPT-4o model, which only solved 13% of problems correctly[2].

It's important to note that while the o1-preview model has shown remarkable abilities, it is still in its early stages. OpenAI is taking a cautious approach to releasing these models, with limited access currently available to ChatGPT Plus or Teams account holders[2].

Citations:
[1] https://www.webpronews.com/chatgpt-...ics-has-ai-mastered-advanced-problem-solving/
[2] https://www.fastcompany.com/91189817/openais-new-o1-models-push-ai-to-phd-level-intelligence
[6] https://www.linkedin.com/posts/kaib...vel-physics-activity-7240719983578664961-d31k
[7] https://community.openai.com/t/open...to-have-super-iq-phd-level-competence/1095182
[8] https://www.linkedin.com/posts/robi...my-phd-code-activity-7241816397138272256-ez7S
 
Something that is 95% as good for a fraction of the cost is basically catnip for investors.
Agreed.

Who is having LLMs solve PhD-level problems?
*Everyone* working on the next frontier of AI, as @iroboto stated.

These models can't think; they regurgitate information they've been trained on.
No, they learn the intrinsic foundational patterns and rules hidden in the training data. They're not just memorizing the training information -- they're distilling it down into that core foundation. That's what enables them to generate completely new outcomes. The more recent chain-of-thought models even perform introspection and correct themselves while generating output at inference time, not during training.
 
I still think DeepSeek is neat, with a novel approach and some clever software engineering. Looking forward to third-party testing results, of which there will be a plethora.
It's likely going to be the most used among the many models out there, unless something comes out that is cheaper or more effective at the same price point.
Until you hit a problem that DeepSeek can't solve sufficiently for you, you will always use DeepSeek. That just makes sense.
 
It's likely going to be the most used among the many models out there, unless something comes out that is cheaper or more effective at the same price point.
Until you hit a problem that DeepSeek can't solve sufficiently for you, you will always use DeepSeek. That just makes sense.
I just think the big players in AI are currently studying DeepSeek's code and will steal any tricks that help with their own programs.
 
Citations:
[1] https://www.webpronews.com/chatgpt-...ics-has-ai-mastered-advanced-problem-solving/
[2] https://www.fastcompany.com/91189817/openais-new-o1-models-push-ai-to-phd-level-intelligence
[6] https://www.linkedin.com/posts/kaib...vel-physics-activity-7240719983578664961-d31k
[7] https://community.openai.com/t/open...to-have-super-iq-phd-level-competence/1095182
[8] https://www.linkedin.com/posts/robi...my-phd-code-activity-7241816397138272256-ez7S
The fact your citations are mostly Reddit posts and LinkedIn nonsense (and the fact many in this thread liked it, lol) is very telling about how all of this is just hype.

An LLM cannot reason. It is fundamentally unable to reason. All the ‘research’ they are doing into this is just training these models on more and more data and getting the same results.
 
The fact your citations are mostly Reddit posts and LinkedIn nonsense (and the fact many in this thread liked it, lol) is very telling about how all of this is just hype.

An LLM cannot reason. It is fundamentally unable to reason. All the ‘research’ they are doing into this is just training these models on more and more data and getting the same results.
We are all welcome to believe what we want. It is not my position to convince you of what people are doing with billions of dollars. If you feel AI is just hype, you're welcome to believe that; but if you want a discussion about what could be happening and what people believe the potential of these technologies to be, we're just providing answers.

Personally, I spend a lot of time in this field; I use this for work. I think it's the future, I think it's where we're headed. You will leverage AI the way we leverage MS Excel in most of our jobs. It's not hard to imagine a future where people have AI assistants helping them. And it's okay to agree to disagree.
 
Quote me where I called this the “harbinger of the end”.

I don’t think we’re skimming the surface of anything because I think most of this tech is nonsense and most of the leaders in this space are bullshit artists, like Saltman.

Fair enough. AI skepticism is a valid stance to take.
 
One of the tricks used by DeepSeek is skipping CUDA and using NVIDIA's assembly-like PTX instead, which allows more fine-grained optimizations.
It was out of necessity; apparently many of the Hopper GPUs they have are of the H800 and H20 variety (with reduced interconnect capabilities). They programmed 20 SMs on each GPU to handle networking and communication, which couldn't be done through CUDA, only through PTX.
 
This is not a political discussion on China or AI. There is no politics in B3D - RPSC talk belongs in the rest of the internet.

Please self moderate.


Also, this thread is about consumer GPU pricing, not AI as a technology. Any talk about evolving AI tech needs to be focussed on its impact on GPU consumer-behaviour and pricing, such as "this new AI platform will drive up interest in cheaper GPUs and increase sales" or "interest in high-end GPUs for AI will be reduced resulting in a trickle-down price reduction for consumers".
 