CES for the gaming chips seems to be expected. When is the expected reveal show/date?
Oh ok. So Jan 2025 CES for the gaming chips seems to be expected.
Yeah, they could probably launch some of them now, but they seem to want as much 40-series stock cleared out as possible before the launch. Hopefully we’ll get at least one of the cards by the end of January. Early word was they’d release the 5080 before the 5090, but unless it’s priced very competitively I’d think they’d want the halo card out first. In any case they’ll probably release within a few weeks of each other as usual; wouldn’t be surprised to see the rumored 5070 launch much later, in the spring. Of course political developments could affect all of this. I hope not, but the possibility can’t be entirely discounted.
The listing at Broadberry puts a price tag of $515,410.43 on the Blackwell DGX B200 AI system, with configuration options as well, mainly dealing with after-sale services. This is the first time we have seen NVIDIA's Blackwell AI product surface on the internet in the form of a retail listing. While we are currently unaware of the supply situation, Blackwell supply is said to be initially constrained, with a larger portion of shipments slated for the first quarter of next year.
When you look at FP8 training performance on the spec chart below, B200 is 2.5x faster than Hopper -- which is only 1.25x faster on a per-die basis. How did they get to that “5x faster training” number? Well, B200 has another new feature to double per-die performance over Hopper: 4-bit arithmetic.
4 bits doesn’t seem like a lot. If you’re using those bits to represent integers, you can only represent 16 distinct values. But Nvidia’s GPUs feature some really clever technology to squeeze the most utility out of those 4-bit numbers. They’re called “mixed precision tensor cores,” and if you want to understand Nvidia’s dominance at AI, you need to understand how they work.
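To get a feel for how coarse 4-bit floats are, here’s a toy quantizer in Python. It assumes the E2M1 layout (1 sign, 2 exponent, 1 mantissa bit) commonly cited for FP4; the exact format and rounding behavior of Blackwell’s tensor cores aren’t spelled out in the article, so treat this as an illustration rather than the real hardware path.

```python
# The positive values representable in FP4 E2M1 (an assumption about the
# format Blackwell uses). With the sign bit, that's 16 encodings, but +0
# and -0 collapse to 15 distinct values.
FP4_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_GRID = sorted({s * v for v in FP4_VALUES for s in (1.0, -1.0)})

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 (E2M1) value."""
    return min(FP4_GRID, key=lambda v: abs(v - x))

# With so few representable values, quantization error is large:
print(quantize_fp4(2.4))   # -> 2.0 (2.0 is 0.4 away, 3.0 is 0.6 away)
print(quantize_fp4(5.1))   # -> 6.0
print(quantize_fp4(0.2))   # -> 0.0
```

The takeaway is that a value like 5.1 lands a full 0.9 away from its nearest FP4 neighbor, which is why 4-bit formats only work when combined with scaling and higher-precision accumulation.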
...
The closer a network can get to being represented entirely with FP4 operations, the closer Blackwell’s training performance can get to that eye-popping 5x number Nvidia cited. And luckily, there’s already some research showing that networks can train with FP4 operations without significant loss of accuracy. If those results can scale to GPT-4-scale networks, then Nvidia has a huge advantage over other datacenter AI chips, which, as far as I can tell, don’t yet support these FP4 operations.
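The core “mixed precision” idea the article describes can be sketched in a few lines: the multiplies happen on low-precision inputs, but the products are summed in a higher-precision accumulator. This is a toy model only; it again assumes the FP4 E2M1 value set, and real tensor cores also apply per-block scale factors that this sketch omits.

```python
# FP4 E2M1 value set (assumed format) including signs.
FP4_VALUES = sorted({s * v
                     for v in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0)
                     for s in (1.0, -1.0)})

def to_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 value."""
    return min(FP4_VALUES, key=lambda v: abs(v - x))

def mixed_precision_dot(a, b):
    """Multiply FP4-quantized inputs, accumulate in full precision."""
    return sum(to_fp4(x) * to_fp4(y) for x, y in zip(a, b))

a = [0.9, -2.2, 3.8]
b = [1.1, 0.4, -0.6]
print(mixed_precision_dot(a, b))          # quantized-input result: -2.0
print(sum(x * y for x, y in zip(a, b)))   # full-precision reference
```

Training entirely in FP4 means tolerating the gap between those two numbers at every step, which is why the research the article mentions (showing little accuracy loss) matters so much for hitting Nvidia’s 5x figure.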
How well does it run Crysis though?
On a genuine note, why have they gone with Intel rather than AMD for the CPU? I thought AMD performed better these days?
I wonder why Nvidia hasn’t shared any details of the SM configuration, cache or clocks.
They are waiting on gaming launch for that.
Hopper is still Volta class; Blackwell is presumably different.

Not sure why they would wait on gaming to share specs of an HPC/AI chip. At Hopper launch they spilled all the beans.
$$$$

5070 Ti rumored to be based on GB203 with only 6% more SMs than the 4070 Ti Super and 16% more than the 4070 Ti. The optimist in me thinks Blackwell SMs must be a lot more efficient or clock much higher.
You know nothing about the prices or performance of these parts, so what makes you wish they had different names now?

The rumoured specs (all but confirmed) of the SKUs really make me wish the 5080 were the 5070 Ti (with GB203* renamed GB204), the 5070 Ti the 5070, and the 5070 the 5060 Ti, etc.
*GB203 could’ve been a 128 SM, 384-bit die just below 500 mm², with 24GB 5080 Ti and 20GB 5080 SKUs. But I suppose crazy AI demand makes it more desirable for Nvidia to just leave a huge gap between GB202 and GB203.
So the 5090 die is 20% larger than the 4090 die, the bus is 512-bit with 32GB of VRAM, and it supports PCIe 5.0. I’m guessing this is going to be expensive as hell; hopefully the performance uplift is as impressive as the specs.