3090 is ~10500 in TSE graphics score
4090 figure given is >19000
Indeed, almost twice the 3090 (non-Ti) performance, which would be quite impressive for plain raster performance.
FinalWire Introduces AIDA64 v6.75: Zen 4 AVX-512 Benchmarks and GeForce RTX 4090 Monitoring
Supposedly a 19,000 score in Time Spy for the 4090. Puts it at around 65% faster than the 3090 Ti.

Hmm, odd... I got a much higher score for TSE. I can't share the exact score because it would automatically give away my source, but let's say more than 20% higher...
Edit: Oh, I just saw the new Kopite7kimi tweet and that's more like it. So he was sandbagging.

Now all we need are equally impressive games.
The jump from the old and subpar Samsung 8nm to the shiny TSMC 5nm should bring enormous gains by itself. That's not a bad starting point, considering Ampere is a very good base.

The RTX 4000 series is going to be very impressive, be it raster, RT, or AI capabilities. A larger jump than Turing > Ampere. People will see soon enough.
The baseline is 10 TF consoles... There won't be another 'Crysis', I think.
Pretty depressing when you realize that PC GPUs will be pushing 3-10x that number barely 2 years into the current console generation.
We're looking at ~2x at 450 W if we go with these rumors. If they hold, it will be a bigger perf/watt improvement than what AMD promised for RDNA3.

Let's hope so, given the insignificant perf/W increase Ampere offered over Turing at the high end.
I understand, but I also see it from another angle: lower-end hardware, be it consoles or PCs, has usually set the baseline, even twenty-plus years ago. It's not really only the consoles' fault; there are many gaming PCs out there that aren't high-end either. Most PCs are at roughly console level well into a generation. The same argument can be made for all the millions of PS4s, One Ss, and Nintendo consoles out there.

There were games using more powerful hardware back then, though (Doom 3, HL2, Far Cry, Crysis come to mind), but those types of games weren't over-representing the market either; most games were following the baseline back then too. The landscape has changed, for consoles too. Most games are multiplatform these days; it's not really like the PS2 days, where you had a large number of exclusives, far more than now.

It's a general change across all platforms, from mobile and streaming to high-end console and PC gaming. Scaling and multiplatform releases are the new norm these days, and I don't actually think that's so bad. We're just transitioning toward unified platforms and games, I think.

With this topic and new GPUs in mind, there's more reason for it now than ten years ago. With the advent of ray tracing, AI technologies, and high-FPS gaming (at higher resolutions), together with scaling becoming better than ever before, these aren't bad times to be in. Especially now, with Sony porting its games over to the PC platform, often with (much) higher fidelity and performance, and even features like ray tracing being added. I foresee scaling getting better and better going forward as well. Exclusive high-fidelity games a la Crysis are just not the way forward in today's and future markets anymore, neither for PC nor console hardware.
The cross-platform dynamic is nice: more games on all platforms. You're right that the hardware target isn't defined by top-end cards, but that target moves much faster on PC (2-3 yrs) than on consoles (8-10 yrs).
[Does this mean] that there are other secrets in the memory subsystem, not only the increased L2$ size?
Re: shared memory, Hopper adds an optional "cluster" level. By grouping thread blocks into clusters, those thread blocks can share each other's shared memory.
From section: [Distributed shared memory] in Nvidia Hopper Architecture
"Figure 13 shows the performance advantage of using clusters on different algorithms. Clusters improve the performance by enabling you to directly control a larger portion of the GPU than just a single SM. Clusters enable cooperative execution with a larger number of threads, with access to a larger pool of shared memory than is possible with just a single thread block."
Is DSMEM only for Hopper? Any rumours?
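The cluster/DSMEM mechanism quoted above can be sketched in CUDA. This is a minimal illustration based on the CUDA thread block cluster API (`cg::this_cluster()`, `cluster.map_shared_rank()`), not code from the whitepaper; it requires an sm_90 (Hopper) GPU and a CUDA 12 toolkit, compiled with something like `nvcc -arch=sm_90`. The kernel name and values are made up for the example:

```cuda
#include <cstdio>
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// Each cluster spans 2 thread blocks; with distributed shared memory (DSMEM),
// a block can directly address the shared memory of its partner block.
__global__ void __cluster_dims__(2, 1, 1) exchange_kernel(int *out)
{
    __shared__ int smem[128];
    cg::cluster_group cluster = cg::this_cluster();
    unsigned rank = cluster.block_rank();       // 0 or 1 within this cluster

    smem[threadIdx.x] = (int)rank;              // fill our own shared memory
    cluster.sync();                             // partner's smem is now valid

    // Map the partner block's shared memory into our address space and read it.
    int *remote = cluster.map_shared_rank(smem, rank ^ 1);
    out[blockIdx.x * blockDim.x + threadIdx.x] = remote[threadIdx.x];

    cluster.sync();  // keep smem alive until the partner finishes reading it
}

int main()
{
    int *out;
    cudaMallocManaged(&out, 256 * sizeof(int));
    exchange_kernel<<<2, 128>>>(out);           // 2 blocks = one cluster of 2
    cudaDeviceSynchronize();
    printf("block 0 read %d from block 1's shared memory\n", out[0]);
    cudaFree(out);
}
```

This is exactly the "larger pool of shared memory than a single thread block" idea from the quoted passage: the `map_shared_rank()` call returns a pointer into another SM's shared memory, so no round trip through global memory is needed.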
Actually, it's not that bad this time, because the last console generation's leap was smaller and the hardware was even weaker: contemporary mid-range GPUs were able to outperform the PS4/XBO before their release, at very affordable prices. In fact, you could build a PC with similar or better performance at the same cost just a few months after their release (750 Ti + cheap i3). Besides, games continued to push PC hardware as they always have. This time around the console CPUs get a decent upgrade as well, so it will be interesting to see how that translates into CPU usage in upcoming games.