Will there be 300W Discrete GPUs in 5 years? 10?

The only suitable memory for a GPU would be GDDR or stacked memory such as Hybrid Memory Cube or High Bandwidth Memory. GDDR5 would have to be soldered to the motherboard; not sure about stacked memory.
Time for the return of CPU Slots?
[image: Slot A Athlon]


People have been pronouncing the death of discrete GPUs for like 10 years.
I am pretty sure that PC gamers will continue to demand more powerful GPUs and that need will be filled.
Not necessarily by discrete GPUs, but definitely by a separate gaming-focused product.
 
That's what I'm saying. dGPUs will be with us for many years for the same reasons we have them today.

That's assuming they are profitable. The low-end GPU market is already gone. The mainstream market will be the next to go. What will be left is the high end, and it is questionable whether there will be enough volume to justify R&D, at least at current levels.
 
Stopped reading there. VR, the great technology that's always right around the corner... I have a sneaking suspicion that the VR penetration numbers will make the 3D TV penetration numbers look excellent!

Not very difficult, given that most TVs come with 3D whether you like it or not. But it's one thing to get a 3D-capable TV into somebody's living room; it's another to get them to buy 3D content, and that is where VR won't have a problem. The people who will buy VR in the next couple of years are the people who continuously spend money on keeping their hardware up to date. Those are the same people who, I think, generate a large part of PC gaming revenue. The majority of people on Steam might have low-end hardware, but I'm sure that if we could see Valve's numbers, you'd find that those people hardly spend time or money on games, while people with high-end hardware are probably buying multiple games a year.
 
People have been pronouncing the death of discrete GPUs for like 10 years.

Blast from the past.

Charlie stating that GPUs would not exist at all in one month's time.

05-11-2010, 04:15 PM

I would ask the question in a more general sense. Will GPUs exist in 5 years. The answer there would be no.

The low end dies this year, or at least starts to do a PeeWee Herman at the end of Buffy the Vampire Slayer (the movie, not the show). There goes the volume. 2012 sees the same happening for the high end. The middle isn't enough to sustain NV.

They have 2 years to make compute and widgets profitable. Good luck there guys.

-Charlie

http://www.semiaccurate.com/forums/showpost.php?p=48497&postcount=10
 
A more interesting question is whether there will be a discrete CPU in 5-10 years. CPUs largely stopped getting faster 5-10 years ago, and even mobile-class CPUs are getting close to high-performance CPUs (within 2-5x, let's say). GPUs are still scaling rapidly, and there's still a 10-30x gap between the high and low end. This is leading to interesting things, like the PS4 devoting roughly a 70/30 split of its die to GPU vs. CPU. At what point do we call it a GPU with a built-in CPU on the side?

Another important thing is that GPUs are beginning to be able to schedule their own work without CPU intervention. Once you throw physics and animation entirely onto the GPU, along with scheduling draws there, the interconnect between GPU and CPU becomes much less important. It's really only used to load assets to the GPU the first time they're used, plus some data transfer for game logic, which is not much more than updating pathing and such (and even a lot of that can run well on a GPU...).
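
None of this is spelled out in the thread, but here's a minimal sketch of the idea using CUDA dynamic parallelism (assuming an NVIDIA GPU with compute capability 3.5+; the kernel names are made up for illustration): the host launches one tiny "scheduler" kernel, and the GPU itself decides how much work there is and dispatches it, with no round trip over the CPU-GPU interconnect.

#include <cuda_runtime.h>

// Child kernel: the actual per-element work (a stand-in for a physics step).
__global__ void update_particles(float *pos, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] += dt;
}

// Parent kernel: decides how much work to launch and launches it
// from the GPU itself (dynamic parallelism), with no CPU involvement.
__global__ void scheduler(float *pos, float dt, int n)
{
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        update_particles<<<blocks, threads>>>(pos, dt, n);
    }
}

int main()
{
    const int n = 1 << 20;
    float *pos;
    cudaMalloc(&pos, n * sizeof(float));
    cudaMemset(pos, 0, n * sizeof(float));

    // The host only kicks off the scheduler; everything else is
    // decided and dispatched on the GPU.
    scheduler<<<1, 1>>>(pos, 0.016f, n);
    cudaDeviceSynchronize();

    cudaFree(pos);
    return 0;
}

Build with something like: nvcc -rdc=true sched.cu -lcudadevrt -o sched. The graphics-API analogue is indirect/multi-draw-indirect submission, where the GPU fills its own draw arguments.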
 
If AMD held the position in its markets that Intel and Nvidia hold in theirs, it would be a straightforward answer, as Pixel outlined before.
Though I don't see how dGPUs can survive once HSA comes to gaming. Some folks have been rumbling about it for some time now.

https://forum.beyond3d.com/threads/amds-unified-gaming-strategy-and-the-jhh-titanic.54499/

Of course, the other question is: will there be an AMD 5 years from now? 10? And if there is, can they get the industry to adopt it? That assumes they can get it out satisfactorily, which has been another problem plaguing their competitiveness in the market.
 
If AMD held the position in its markets that Intel and Nvidia hold in theirs, it would be a straightforward answer, as Pixel outlined before.
Though I don't see how dGPUs can survive once HSA comes to gaming. Some folks have been rumbling about it for some time now.

https://forum.beyond3d.com/threads/amds-unified-gaming-strategy-and-the-jhh-titanic.54499/

Of course, the other question is: will there be an AMD 5 years from now? 10? And if there is, can they get the industry to adopt it? That assumes they can get it out satisfactorily, which has been another problem plaguing their competitiveness in the market.

HSA is not tied only to AMD: Qualcomm, Samsung, TI, etc.

HSA is also supported on dGPUs.

For me, answering this doesn't rest on the increase in APU/SoC performance, but on the dGPU side. What will a dGPU look like in 10 years? What will its capabilities be? If we only project the performance increase of SoCs/APUs over 10 years, then for a fair comparison we also need to imagine what a dGPU could be in 10 years.

Then we can maybe try to answer the question.
 
OK, but what do you think the relevance of that is to PC gaming? HSA support is a whole different thing from the game programming itself changing around it.
Unless APUs lag significantly behind dGPUs, I don't see what the evolution of dGPUs would accomplish either.
AMD might need to put in their version of jersey barriers to make it look better at the start, though.

Maybe there can be a very top end where a niche of gamers forks out for very expensive hardware for more power. It'd still be an end to the current scenario.
 
A more interesting question is whether there will be a discrete CPU in 5-10 years.
In consumer-facing systems, is there a modern product line that doesn't come with graphics on-die?
It's usually workstation/server products, and typically those above the entry-level repurposed desktop SKUs.
 
I could imagine there'd be socket, pin, and perhaps motherboard-trace and cooling complexities, etc., which would throw some roadblocks into creating an all-in-one beast chip that would make top-end 300 W NVx80/AMDx800 GPUs obsolete.

For a 300 W single-chip solution, you'd need a special motherboard with a gargantuan socket configuration, different in size from the one you'd need for a 200 W mid/high-end integrated GPU, and different again from a mid-range 100 W integrated solution.

ATX standards probably wouldn't do it, as you'd probably need more space for an even bigger cooling solution on the underside of the mobo for the 200-300 W variants.

I'm sure motherboard manufacturers would resist something that further shrinks the consumer base for each product. Each store would have to stock multiple motherboards of every socket type to satisfy each potential customer, and customers wouldn't be able to switch between high-end and low-end GPU'd chips.
Asus, instead of making 70,000 of one motherboard model, would have to gamble on a lot more unsold inventory, with tens of thousands of each socket-type motherboard. There'd be 3-4 sub-categories for each motherboard GPU-power category variant. A store might have 2-3 of one socket type but none of another, and Asus or the stores could end up with a lot of unsold merchandise of a particular socket type.

Does High Bandwidth Memory (stacked GPU memory that is faster than Hybrid Memory Cube) have to be soldered like GDDR? Does the increased bandwidth demand more power, and thus necessitate soldering?
How would that work? Would the memory come on the motherboard, increasing motherboard cost and forcing motherboard manufacturers to bear responsibility for warrantying memory issues?
 
In consumer-facing systems, is there a modern product line that doesn't come with graphics on-die?
It's usually workstation/server products, and typically those above the entry-level repurposed desktop SKUs.

I was going to mention Vishera (AMD FX-8000) but then I noticed you said "modern".
 
I imagine that 200+ watt GPUs will be the only ones available, due to more powerful SoCs/APUs using ~250 W making midrange GPUs irrelevant (assuming <100 W for the CPU and ~150 W for the GPU).
 
That's what I'm saying. dGPUs will be with us for many years for the same reasons we have them today.

Enthusiast-level dGPUs and maybe high-end dGPUs will survive. The midrange will be gone, except as legacy system components. And the lower high-end dGPUs will likely be gone as well.

The question then becomes: can a company specializing in dGPUs survive on enthusiast and some percentage of high-end dGPUs only? Nvidia evidently doesn't think so, as evidenced by their efforts to diversify as much as possible and to focus on markets outside of desktop computing. They saw the writing on the wall years and years ago and planned accordingly. Now they just need some of those plans to succeed (HPC is going well, Tegra not so much).

Regards,
SB
 
A top GPU 10 years ago was a single-slot card like the 2005 GeForce 7800 GTX. It drew about 100 watts.
By 2009 we had the dual-slot GTX 295 at around 290 watts. Five years later, in 2014, we had the triple-slot Titan Z at around 400 watts.

So the trend is clear. In 2020 we'll have a quad-slot card drawing 500 watts.
 
A top GPU 10 years ago was a single-slot card like the 2005 GeForce 7800 GTX. It drew about 100 watts.
By 2009 we had the dual-slot GTX 295 at around 290 watts. Five years later, in 2014, we had the triple-slot Titan Z at around 400 watts.

So the trend is clear. In 2020 we'll have a quad-slot card drawing 500 watts.
It might be the only way for dGPUs to justify a performance jump over integrated SoCs.
 