Around 2070S I guess. Perhaps a bit lower depending on BW and clock speeds.

Yes, true. It's probably around the 2080 and 2080 Super depending on the situation; it should be, since 12TF lies around those.
Where do you put PS5?
I could be wrong here, but I thought the XSX had a separate pool of memory specifically set aside for the CPU?
The 2070 Super and the 2080 are close. Maybe they are 10~15% apart. The Series X GPU can easily drop 10% below the 2080 if starved for bandwidth. Worse yet, the Series X has a portion of its RAM at around 336GB/s, which can degrade performance even further.

Sure it can, if you have enough of it. I don't even understand your point. Nothing we know of current Navi vs the RTX 2000 series points to the XSX GPU, which will be Navi 2, being somewhere between the 2070 and 2080.
12TF lies around those
Though I am open for correction.
And how much advantage does the XBX have over the RX 580 in terms of BW (that it still needs to share with the CPU)? Or the PS4 Pro vs the 470, for that matter?

The 2070 Super and the 2080 are close. Maybe they are 10~15% apart. The Series X GPU can easily drop 10% below the 2080 if starved for bandwidth. Worse yet, the Series X has a portion of its RAM at around 336GB/s, which can degrade performance even further.
Series X: 560GB/s
2070 Super: 448GB/s
2080: 448GB/s
2080 Super: 495GB/s
In the best case, the Series X has a 112GB/s advantage over the 2080 and the 2070 Super. Is that enough to overcome the CPU/GPU contention, especially when the CPU is fully loaded with 16 threads? My guess is a resounding NO. Though I am open for correction.
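To put rough numbers on that contention question, here's a quick back-of-the-envelope sketch. The 60GB/s CPU draw is purely an assumed figure for a fully loaded 16-thread CPU, and the model simply assumes CPU traffic comes straight out of the fast pool's budget; neither is a confirmed Series X detail.

```python
# Back-of-the-envelope bandwidth comparison (all figures in GB/s).
# The CPU draw is an assumed number, not anything confirmed for Series X.
SERIES_X_FAST_POOL = 560   # 10 GB pool
SERIES_X_SLOW_POOL = 336   # remaining 6 GB
RTX_2070_SUPER = 448
RTX_2080 = 448
RTX_2080_SUPER = 495

assumed_cpu_draw = 60      # hypothetical draw with all 16 threads busy

effective_gpu_bw = SERIES_X_FAST_POOL - assumed_cpu_draw
print(f"Series X GPU bandwidth after CPU draw: {effective_gpu_bw} GB/s")
print(f"Remaining advantage over 2080/2070S:   {effective_gpu_bw - RTX_2080} GB/s")
print(f"Remaining advantage over 2080 Super:   {effective_gpu_bw - RTX_2080_SUPER} GB/s")
```

On those assumptions the raw 112GB/s lead shrinks to about 52GB/s over the 2080 and almost nothing over the 2080 Super; change the assumed CPU draw and the conclusion moves with it.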
For example, if there are no other workloads with a comparable power draw, we may conclude that unless you want to use AVX, you're good and your GPU will always run at 2.23GHz.
Around 2070S I guess. Perhaps a bit lower depending on BW and clock speeds.
2080-level cards run much faster on their boost clocks and have a parallel INT pipeline.
If we measure theoretical flops, that places them well above 12TF.
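For reference, the paper math behind that claim; the ~1.9GHz sustained clock is an assumption based on typical in-game boost behaviour, not the spec sheet:

```python
# Theoretical FP32 throughput: shaders * 2 ops per clock (FMA) * clock.
cuda_cores = 2944             # RTX 2080
rated_boost_ghz = 1.71        # reference boost clock
assumed_game_boost_ghz = 1.9  # assumed sustained boost; varies per card and cooler

rated_tf = cuda_cores * 2 * rated_boost_ghz / 1000
boosted_tf = cuda_cores * 2 * assumed_game_boost_ghz / 1000
print(f"RTX 2080 at rated boost:   {rated_tf:.1f} TF")    # ~10.1 TF
print(f"RTX 2080 at assumed boost: {boosted_tf:.1f} TF")  # ~11.2 TF
# On top of this, Turing can issue INT32 work concurrently with FP32,
# so effective throughput in mixed shaders exceeds the FP32 figure alone.
```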
Infinity fabric will likely be better at handling memory contention between CPU and GPU, so it will likely be less of an impact this gen vs last gen. AMD has improved their APU memory performance over the years due to constraints in the PC APU space. I don't think the issue will be as bad as some make it out to be, but the XSX will likely have a ~40% GPU bandwidth advantage vs the PS5, which will not be easy to overcome.

40%?

Sorry, I must have miscalculated. It's probably around 30% if we assume about 60GB/s going to the CPU for both consoles.
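The arithmetic behind that revised figure, using the poster's own 60GB/s CPU allowance (that allowance is an assumption, as stated above):

```python
# GPU bandwidth advantage, XSX vs PS5 (GB/s).
xsx_bw, ps5_bw, cpu_draw = 560, 448, 60

raw = xsx_bw / ps5_bw - 1                               # paper figure
effective = (xsx_bw - cpu_draw) / (ps5_bw - cpu_draw) - 1
print(f"Raw advantage:          {raw:.0%}")        # 25%
print(f"After 60GB/s CPU share: {effective:.0%}")  # ~29%
```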
Yes, true. It's probably around the 2080 and 2080 Super depending on the situation; it should be, since 12TF lies around those.
Where do you put PS5?

Actually, just naively scaling from the RX 5700 XT, 12 TF of RDNA2 should be somewhat faster than the 2080 Super even if it were only as efficient per clock as RDNA1, but AMD claims RDNA2 gets more work done per clock too (real work, not theoretical flops-work).
NV states default TF but they actually reach higher numbers. A 2080 is actually around 11TF, for example, or that's what some have explained to me here. I did the same comparison as you before.

I'm not comparing to flops, theoretical or whatever it clocks itself to in a real situation. I'm comparing to real-world performance across a variety of games (aka TPU numbers) by assuming RDNA2/XSX is as fast as the 5700 XT per FLOP, so RDNA2/XSX performance would be 5700XT * 1.246, which would end up faster than the 2080S in all but 4K. Add to that AMD's claimed improvements in performance per clock and it should be faster regardless of resolution.
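A rough sketch of that scaling for anyone who wants to reproduce it; the 1.246 factor is just the paper-TFLOPS ratio, and the baseline score of 100 is a placeholder rather than a real TPU figure:

```python
# Naive TFLOPS-ratio scaling from the RX 5700 XT to the Series X GPU.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000  # 2 FP32 ops per shader per clock

rx5700xt = tflops(2560, 1.905)  # ~9.75 TF at rated boost
xsx      = tflops(3328, 1.825)  # ~12.15 TF at its fixed clock
scale = xsx / rx5700xt
print(f"Scaling factor: {scale:.3f}")  # ~1.245 with these clocks (the post says 1.246)

# Apply the factor to a TPU-style relative-performance score for the 5700 XT.
# 100.0 is a placeholder baseline, not an actual TPU number.
baseline_score = 100.0
print(f"Estimated XSX score: {baseline_score * scale:.1f}")
```

As the next reply points out, real performance doesn't scale linearly with TFLOPS, so this is best read as an optimistic upper bound.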
so RDNA2/XSX performance would be 5700XT * 1.246
Unfortunately performance doesn't scale up linearly at all.
Will be interesting how much L3 they each have.
No, it doesn't, but that doesn't take into account the architectural improvements either.
I gave this some thought, and taking in some of the discussion from Shifty, Fox, Brit, some of the other B3D members talking about ML/AI, and Alex, I actually think Alex is right here.

I happen to think he's probably wrong, and the procedural work will be handled up front and put onto the SSD. I think the reason it was done in memory was because of HDD performance. With a fast SSD they can probably make big gains up front and save a good chunk of the computing power. It's a good topic for discussion.
Edit: I do think there's a larger industry discussion to be had about the cost of games. Gamers are expecting a lot, but without the cost of games going up, I'm not sure how many development studios are actually financially equipped to handle it. How many flops can Ubisoft absorb if the cost of game development goes up 25% or 50% to meet consumer expectations for this new hardware?