Nvidia GeForce RTX 4080 Reviews

I'm talking about a GPU (a consumer one, by the way); you appear to be trying to redefine a GPU to be something imaginary - good luck with that :)
I'm asking whether HBM, integrated through an interposer (a more complex and advanced form of die-to-die interconnect) and itself a 3D design made of several stacked dies, is considered a "chiplet" design in general. It's not clear to me why it shouldn't be.

And the point I'm trying to make is that there is more than one way to do something as "chiplets". This could also be an area where one company gets the upper hand simply by choosing a better design for its "chiplet" product.
 
Yep, eventually NVidia will put its IP across multiple chiplets to construct a GPU.

Probably, but will it happen soon? According to AMD, they were motivated by the diminishing returns of manufacturing cache and interfaces on advanced nodes. The question, though, is what did they have to sacrifice? Infinity Cache latency benchmarks will be interesting.
 
I'm asking whether HBM, integrated through an interposer (a more complex and advanced form of die-to-die interconnect) and itself a 3D design made of several stacked dies, is considered a "chiplet" design in general. It's not clear to me why it shouldn't be.

And the point I'm trying to make is that there is more than one way to do something as "chiplets". This could also be an area where one company gets the upper hand simply by choosing a better design for its "chiplet" product.
Yes, agreed there's a variety of ways for a product to be chiplet-based. For example, Xenos can be described as a chiplet product; it's also a chiplet GPU, since the pair of chips implements GPU functionality. You can say that Voodoo2:


is a chiplet GPU, if you want, but integration/packaging isn't in the same mould as current GPUs (and "GPU" was later coined as a term that includes geometry processing, not just pixel processing).

"Chiplet", you can argue, is just a marketing term. The history of systems and system integration is pretty colourful:


and "chiplet" arguably stretches back decades...

Probably, but will it happen soon? According to AMD, they were motivated by the diminishing returns of manufacturing cache and interfaces on advanced nodes. The question, though, is what did they have to sacrifice? Infinity Cache latency benchmarks will be interesting.

AMD's claims about latency:

gjVHASzBujhMHvVwtGdTib.jpg


It's a cost-performance trade-off in the end.

For "data centre GPUs" we've seen NVidia contemplating chiplets as a way to specialise its designs based on sector:


With L2 cache in Ada being "finely" distributed across the GPU, it's hard to extract that as a function point that can be separated out into one or more chiplets. The stacking seen for 3D V-cache in Ryzen might be a way to provide localised pools of cache so that NVidia can retain the locality/scale seen in Ada?

It'll be interesting to see whether cache (on consumer GPUs) gets substantially larger than ~100MB as the next generations unfold. It might turn out to be the sweet spot. Similar to the way that 8/12/16/24GB appears to be the segmentation for current performance tiers, while it looks unlikely that entry-level cards will be 16GB within a decade, if ever.
 

That lack of demand and abundance of stock is evident on eBay, where many RTX 4080 cards are selling for around or just over their official store prices, a far cry from the bad times when GPUs were being scalped for three or four times their MSRP. VideoCardz reports that one scalper is offering six RTX 4080s from various manufacturers for MSRP. The seller writes that the "Market isn't what I thought."

Cry me a river.

4080-ebay.PNG

:ROFLMAO::ROFLMAO::ROFLMAO:
 
It's a weird situation in the UK at the moment. Seems to be loads of really well priced stock of the 3070 Ti and below, then nothing at all in the 3080/3090 range. Then tons of 4080s at around MSRP, then no 4090s.

Looks like people can see it for the rip off it is.
 
Is this even true???
The Newegg screenshot is accurate, but the editorialized comment is misleading. A specific Gigabyte 4080 variant is the top-selling model (over whatever time scale they use, I'm not sure). To claim that the 4080 is the best-selling GPU, you would have to sum sales across all brands/models. Still, I suppose the fact that there are three 4080 models in the top 10 means the card is competing well vs. popular midrange offerings (3060, 6600, etc.).

Amazon sales are a little different. 3060s seem to top the list, followed by a specific 7900XTX. But 4080s do show up in the top 10.
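To make the per-listing vs. aggregated distinction concrete, here's a tiny Python sketch with invented sales numbers (all figures are hypothetical, not from Newegg or Amazon): a single 4080 SKU can top the per-listing chart while a different model wins once you sum across brands.

```python
from collections import Counter

# Hypothetical, made-up sales figures per listing: (model, brand, units sold).
listings = [
    ("RTX 4080", "Gigabyte", 700),
    ("RTX 4080", "MSI", 200),
    ("RTX 4080", "ASUS", 150),
    ("RTX 3060", "EVGA", 600),
    ("RTX 3060", "Zotac", 550),
]

# Per-listing ranking: the single best-selling SKU.
top_listing = max(listings, key=lambda entry: entry[2])

# Aggregated ranking: sum units across all brands of each model.
totals = Counter()
for model, brand, units in listings:
    totals[model] += units
top_model, top_units = totals.most_common(1)[0]

print(top_listing)           # ('RTX 4080', 'Gigabyte', 700)
print(top_model, top_units)  # RTX 3060 1150
```

With these invented numbers the Gigabyte 4080 tops the per-listing chart, yet the 3060 outsells the 4080 overall, which is exactly why a "top-selling model" screenshot doesn't prove a card is the best-selling GPU.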
 