Someone benchmarked the mobile 5080 in Geekbench: 18% faster than a mobile 4080 and 6% faster than a mobile 4090.

Irrelevant without knowing the power limit of the GPU. Mobile GPUs can show wildly different results depending on how much power they can use.
Agreed, and it also depends on whether the GPU was prioritized over the CPU. Many modern "gaming" laptops include software that disables or limits CPU boost (in Windows power settings, capping Max CPU speed at 99% fully disables boost), which frees up shared power and thermal headroom for the GPU. That in turn allows a more consistent and higher GPU clock.
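For anyone wanting to script that tweak rather than click through Settings, a minimal sketch using Windows' stock powercfg tool (run from an elevated prompt; this edits the AC profile of the active power plan, and the 99% value is the same trick described above):

```python
import subprocess

# Cap "Maximum processor state" at 99% on the active power plan (AC profile).
# On many laptops this disables CPU boost entirely, leaving more of the shared
# power/thermal budget for the GPU. PROCTHROTTLEMAX is powercfg's built-in
# alias for the maximum-processor-state setting (see `powercfg /aliases`).
subprocess.run(
    ["powercfg", "/setacvalueindex", "scheme_current",
     "sub_processor", "PROCTHROTTLEMAX", "99"],
    check=True,
)
# Re-apply the current scheme so the change takes effect immediately.
subprocess.run(["powercfg", "/setactive", "scheme_current"], check=True)
```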
FC6 is most definitely CPU limited on the 5090, since it shows a higher gain on 5080 vs 4080, which makes zero sense otherwise.

Of any game that's available, they would choose a CPU limited game? Let's wait for independent reviews.
APTR (A Plague Tale: Requiem) is a more GPU limited game, so +40% is the more likely average result for 5090 vs 4090. Considering that we're looking at roughly a +30% FP32 change between the 4090 and 5090, that seems like a solid enough gain really.
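For what it's worth, that ~+30% FP32 figure falls straight out of Nvidia's published core counts and boost clocks; a quick sketch, assuming the usual 2 FLOPs per core per clock (FMA) and noting that real sustained clocks vary with power limits:

```python
# Back-of-the-envelope FP32 throughput from published specs.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    # 2 FLOPs per CUDA core per clock (fused multiply-add).
    return 2 * cuda_cores * boost_ghz / 1000

rtx_4090 = fp32_tflops(16_384, 2.52)  # ~82.6 TFLOPS
rtx_5090 = fp32_tflops(21_760, 2.41)  # ~104.9 TFLOPS
print(f"5090 vs 4090 FP32: {rtx_5090 / rtx_4090 - 1:+.0%}")  # ~ +27%
```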
Or we can speculate..
The choice is weird, and all I can think of is that FC6 is highly memory bandwidth sensitive, which means that in theory it should provide higher than normal gains on Blackwell.
What we can say definitively is that Nvidia is hiding real performance figures. Whether there’s a reason for it or not, we shall find out shortly.

I think it will be between 20-40% depending on the game. Far Cry probably represents the lower end and Plague Tale the upper.
4080 Super to 5080 will be interesting. Just by FLOPS it’s only 7 or 8 percent. In terms of architectural improvements and bandwidth, I’m not sure where that ends up. 20%?
I’m guessing RT performance will be a little bigger, which is how you get to maybe 30 or 40% like Far Cry 6 in the slides.
I’m not sure what big architecture changes for the SMs and front end are on the table for them. DX13 feels a long way off but I’m not sure where things go as long as the programming model doesn’t change.
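As a sanity check, the "only 7 or 8 percent by FLOPS" figure does match the published core counts and boost clocks; same rough sketch as above, with the same caveat about sustained clocks:

```python
# FP32 estimate for the 4080 Super -> 5080 comparison from published specs.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    # 2 FLOPs per CUDA core per clock (fused multiply-add).
    return 2 * cuda_cores * boost_ghz / 1000

rtx_4080s = fp32_tflops(10_240, 2.55)  # ~52.2 TFLOPS
rtx_5080  = fp32_tflops(10_752, 2.62)  # ~56.3 TFLOPS
print(f"5080 vs 4080 Super FP32: {rtx_5080 / rtx_4080s - 1:+.0%}")  # ~ +8%
```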
Should have clarified I meant the 5090 specifically. These cheaper tiers will probably range between 10-20%.
I suspect Nvidia’s architectures have a major starvation problem. Maybe they can do something to improve utilization at the expense of top line numbers.
The GeForce RTX 5090 GPU is equipped with three encoders and two decoders, the GeForce RTX 5080 GPU includes two encoders and two decoders, the GeForce RTX 5070 Ti GPU has two encoders and a single decoder, and the GeForce RTX 5070 GPU includes a single encoder and decoder. These multi-encoder and multi-decoder setups, paired with faster GPUs, enable the GeForce RTX 5090 to export video 60% faster than the GeForce RTX 4090 and 4x faster than the GeForce RTX 3090.
GeForce RTX 50 Series GPUs also feature the ninth-generation NVIDIA video encoder, NVENC, which offers a 5% improvement in video quality for HEVC and AV1 encoding (BD-BR), as well as a new AV1 Ultra Quality mode that achieves 5% more compression at the same quality. They also include the sixth-generation NVIDIA decoder, with 2x the decode speed for H.264 video.
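In practice the AV1 hardware encoder is most easily reached through FFmpeg's av1_nvenc encoder; a minimal sketch, assuming an FFmpeg build with NVENC support (file names and quality settings here are illustrative, not NVIDIA's):

```python
import subprocess

# Encode to AV1 on the GPU's NVENC block via FFmpeg's av1_nvenc encoder.
# "-preset p7" is NVENC's slowest/highest-quality preset; "-cq 30" sets a
# constant-quality target. input.mp4 / output.mkv are placeholder names.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4",
     "-c:v", "av1_nvenc", "-preset", "p7", "-cq", "30",
     "-c:a", "copy", "output.mkv"],
    check=True,
)
```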