NVIDIA GeForce RTX 50-series Blackwell Availability

Today, just like the crypto-mining mania, there's an "AI" mania. I know people who are buying 5090s not because they're gamers but because they want to do edge AI, and they're also more willing to pay extra. I've also heard rumors that some people in China are replacing the 5090's memory with chips of twice the capacity, making them 64GB cards, plus rumors of 3GB modules with the potential for 96GB. Combined with the ban on AI chip exports to China, I'd guess this is only going to get worse.
There are 4GB GDDR7 chips?
 
Technically the standard goes up to 8GB per chip, but right now we're only getting a small number of expensive 3GB chips.
4, 6 and 8 should all be possible in the future, though.
That's what I thought. I'm wondering how they are modding 64GB onto a 5090; I thought there had to be some PCB considerations to do clamshell.
 
I'm guessing it's referring to doubling VRAM by going double-sided (clamshell), not by using 4GB chips, which don't exist.

While the standard allows for it, whether it's technically and/or financially practical for those densities to be deployed within GDDR7's lifespan is debatable. The GDDR6 spec allowed for 1GB, 1.5GB, 2GB, 3GB, and 4GB chips; only 1GB and 2GB were ever deployed in the market.
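For concreteness, the capacity math behind the rumored mods works out cleanly. This is a quick back-of-the-envelope sketch; the 512-bit bus and 32-bit-per-device interface are standard figures for the 5090 and GDDR7, not something stated upthread:

```python
# Back-of-the-envelope VRAM math for a 5090-class card.
# GDDR7 devices use a 32-bit interface, so a 512-bit bus hosts 16 chips per side.
BUS_WIDTH_BITS = 512
BITS_PER_CHIP = 32
chips_per_side = BUS_WIDTH_BITS // BITS_PER_CHIP  # 16

def capacity_gb(gb_per_chip, clamshell=False):
    """Total VRAM for a given per-chip density; clamshell doubles the chip count."""
    sides = 2 if clamshell else 1
    return chips_per_side * sides * gb_per_chip

print(capacity_gb(2))                  # stock 5090 (2GB chips): 32 GB
print(capacity_gb(2, clamshell=True))  # double-sided 2GB mod: 64 GB
print(capacity_gb(3, clamshell=True))  # clamshell with 3GB chips: 96 GB
```

That matches both rumored figures: 64GB from going double-sided with today's 2GB chips, and 96GB if the 3GB chips were used the same way.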

Keep in mind that with China we aren't talking about modding these at home with a soldering iron. These are companies with proper equipment and trained professionals, and they've already done it at scale with the 40-series.
 
Are they making custom PCBs? And wouldn't the firmware also have to be changed? Or does it auto-detect when two modules are present on a channel?
 
I haven't looked into this in detail. From what I understand, this became publicly known when companies started making the cards available to the public; it's not a situation where some enthusiast was modding them and documented exactly how they did it.



In terms of the firmware, it's worth noting that Nvidia's firmware signing was, I believe, broken (at least to some degree) in 2023. So firmware-level modification being a requirement may not be a barrier in itself.

I've seen both custom PCBs and reuse of old 3090 PCBs to make the 4090 franken-cards. Apparently the 4090 is pin-compatible with the 3090, and since the 3090 was a clamshell design when it was introduced, you just swap the 4090 GPU and the newer 16Gbit-density GDDR6X chips onto the 3090 PCB, flash the firmware, and away you go.

I suspect the firmware is modified/customized, but the cards apparently work just fine out of the box with the regular standard Nvidia drivers.

 
Availability in the US (Newegg/Micro Center) seems decent now, except for the 5090. It took about three months to get there for the 5080 and a bit less for the 5070s, which isn't bad compared to other launches considering all the doom and gloom. Now to see how sticky these prices are over the next few months.

The 5080 is starting at $1,400, the 5070 Ti at $900, the 5070 at $600, and the 5060 Ti 16GB at $480. The 40% markup over MSRP on the 5080 is the standout.
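Assuming Nvidia's launch MSRPs ($999 for the 5080, $749 for the 5070 Ti, $549 for the 5070, $429 for the 5060 Ti 16GB; from memory, so double-check), the markups at those street prices work out roughly as:

```python
# Street price vs. assumed launch MSRP; markup = (street / msrp - 1) * 100.
cards = {
    "5080":         (1400, 999),
    "5070 Ti":      (900, 749),
    "5070":         (600, 549),
    "5060 Ti 16GB": (480, 429),
}

for name, (street, msrp) in cards.items():
    markup = (street / msrp - 1) * 100
    print(f"{name}: {markup:.0f}% over MSRP")
```

That gives roughly 40% for the 5080, 20% for the 5070 Ti, 9% for the 5070, and 12% for the 5060 Ti 16GB, which is why the 5080 stands out.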

[Attachment: 5080-stock.png]
 
^^ TL;DV: the 5070 and 5070 Ti have better perf/price than the 9070/XT due to current retail prices.
If anyone's been following availability and pricing over the last few months, this isn't news when we're looking at the USA.
The 9070 XT is selling at the same price level as the 5070 Ti, while the 9070 is at least $100 over the 5070.
 
I’m anxiously awaiting the flood of angry YouTube videos about 9070 pricing and availability. Any day now.
 
Doom: Dark Ages is the next unoptimized game for nVidia users: https://www.computerbase.de/artikel...s_in_wqhd_uwqhd_und_ultra_hd_mit_hwraytracing

The 5090 is only 47% faster than the 9070 XT with "hardware" raytracing. I think it's quite clear that these developers have no clue how to optimize their engine for modern nVidia GPUs and are wasting so much potential. 100 FPS at 1440p with rasterization and simple raytracing on a 120 TFLOPs, 1.8 TB/s GPU is a joke.

I don't think nVidia should put much wafer capacity into gaming for the next few years. It doesn't matter to them, and game developers don't care about anything anymore.
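For reference, the ~120 TFLOPs figure quoted above comes from the usual shaders × 2 FMA ops × clock formula. With the 5090's published 21,760 CUDA cores it implies a clock around 2.76 GHz; at the official ~2.41 GHz boost it's closer to 105 TFLOPs. A quick sketch (the clock values are assumptions):

```python
# Theoretical FP32 throughput: cores * 2 ops per clock (FMA) * clock in GHz -> TFLOPs.
# 21,760 is the published 5090 CUDA core count; the clocks below are assumed.
def fp32_tflops(cores, clock_ghz):
    return cores * 2 * clock_ghz / 1000

print(f"{fp32_tflops(21760, 2.41):.1f}")  # at official boost clock: 104.9
print(f"{fp32_tflops(21760, 2.76):.1f}")  # clock needed for the ~120 TFLOPs figure
```

Either way, the raw-throughput gap versus a 9070 XT is far larger than the 47% result, which is the poster's point.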
 
I think the game is probably optimised fine for NV hardware. It's just that the RT here, even on "Ultra Nightmare", is not exactly high-end stuff. RT reflections are in, but they're pretty simple and don't apply to rough materials. Don't forget, the RT effects were designed to run on the old RDNA 1.5/2 consoles, so it has to be ultra-fast, simple-material stuff. The only issue is that, at launch, the RT settings don't scale much higher than that simple stuff. Path tracing later presumably changes that.
 
CB.de has started using upscaling in all of their benchmarks, and while this is likely closer to how people actually play games these days, it diminishes their GPU testing, as everything above mid-range ends up CPU-limited to varying degrees.

If Doom runs similarly to The Great Circle, then it has some very high CPU load in its default console RTGI mode, and you basically have to use the PC-exclusive PT mode to put the bottleneck back onto high-end GPUs.

That being said, the 9070 XT placing higher than the 5070 Ti is abnormal even in the case of a heavy CPU limitation.
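The CPU-limit point can be illustrated with a toy frame-rate model (all numbers below are made up for illustration): the frame rate you measure is capped by whichever of the CPU or GPU is slower, and upscaling only speeds up the GPU side, so faster GPUs all pile up at the same CPU ceiling.

```python
# Toy bottleneck model: observed fps is the lower of the CPU- and GPU-limited rates.
def observed_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

CPU_FPS = 120          # hypothetical CPU-side ceiling in the RTGI mode
UPSCALE_GAIN = 1.6     # rough fps gain from upscaling, assumed

for gpu, native_fps in [("mid-range", 70), ("high-end", 140), ("flagship", 190)]:
    native = observed_fps(CPU_FPS, native_fps)
    upscaled = observed_fps(CPU_FPS, native_fps * UPSCALE_GAIN)
    print(f"{gpu}: native {native}, upscaled {upscaled}")
```

With upscaling on, the hypothetical high-end and flagship cards both read 120 fps, so the benchmark can no longer tell them apart; only the mid-range card still shows a GPU-limited result.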
 
Something does look wrong with Doom on Blackwell right now. I see the 4070 matching the 5070 and the 4080 beating the 5080, which should basically never happen.

 