It was in the FTC leaks: the Series X GPU was supposed to have a few more CUs, but they failed to secure the silicon and Phil apologized for it. I will provide it here for context once I find it.
Link?
That makes no sense to me. The GPU as it is already has too many WGPs per shader engine; any more would have bottlenecked it further.
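For a rough sense of that argument, here is a quick sketch comparing WGPs per shader array across the commonly cited RDNA 2 layouts (Series X: 52 of 56 CUs active over 4 shader arrays; PS5: 36 of 40 over 4; Navi 21: 80 over 8). These layout figures are assumptions drawn from public die breakdowns, not from this thread.

```python
# Rough illustration of the "too many WGPs per shader engine/array" argument.
# CU counts and shader-array layouts are commonly cited RDNA 2 figures and are
# assumptions here, not something stated in this thread. 1 WGP = 2 CUs.
configs = {
    "Series X (52/56 CUs)": {"active_cus": 52, "physical_cus": 56, "shader_arrays": 4},
    "PS5 (36/40 CUs)":      {"active_cus": 36, "physical_cus": 40, "shader_arrays": 4},
    "Navi 21 (80 CUs)":     {"active_cus": 80, "physical_cus": 80, "shader_arrays": 8},
}
for name, c in configs.items():
    wgps_per_array = c["physical_cus"] / 2 / c["shader_arrays"]
    print(f"{name}: {wgps_per_array:.1f} WGPs per shader array "
          f"({c['active_cus'] / c['shader_arrays']:.1f} active CUs per array)")
```

Under those assumptions the Series X packs 7 WGPs per shader array versus 5 on the other parts, which is the imbalance being referred to.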
I would like to see the exact quote.
This is from this Verge article:

Spencer admits that Microsoft had a "yield miss for Scarlett," the codename for Xbox Series X / S consoles, but that it and the delay to Halo Infinite weren't the "main factors in our console scarcity" during the initial launch period. Microsoft had cut the compute units on its Xbox Series X silicon down to 52 from 56 to improve production yield, but the company still missed a target for hardware production of the console. Microsoft also had a strategic bet on cloud that it appears to have been holding back chips for.
That's a yield miss. If anything, that suggests they should have lowered the clocks on the X or disabled even more CUs (down to 48).
I remember reading more about it in the actual leaks; unfortunately, I deleted the emails and only kept the PDFs with the roadmap. But yes, Sony has had a much better roadmap than MSFT this gen. They have made better decisions, and I'm confident the PS5 Pro will meet consumers' expectations, especially in time for GTA 6 and Death Stranding 2.
The target was 56 CUs!
No, it wasn't. That's the full chip. No console was ever going to ship with all of the GPU's compute units enabled.
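To illustrate why disabling CUs recovers yield (the reason only 52 of 56 ship enabled), here is a toy Poisson-style defect model. The per-CU defect rate is a made-up illustrative number, and the model ignores defects elsewhere on the die.

```python
import math

# Toy defect model: each CU independently survives with probability exp(-defects_per_cu).
# The 0.02 defect rate is invented purely for illustration.
def usable_fraction(total_cus: int, required_cus: int, defects_per_cu: float) -> float:
    """Probability that at least `required_cus` of `total_cus` CUs are defect-free."""
    p_ok = math.exp(-defects_per_cu)
    max_bad = total_cus - required_cus
    return sum(
        math.comb(total_cus, k) * (1 - p_ok) ** k * p_ok ** (total_cus - k)
        for k in range(max_bad + 1)
    )

for need in (56, 54, 52, 48):
    print(f"need {need}/56 good CUs -> {usable_fraction(56, need, 0.02):.1%} of dies usable")
```

Even with a modest defect rate, requiring all 56 CUs throws away a large share of dies, while allowing four bad CUs salvages almost all of them.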
Considering that MS needs to respond to the PS5 Pro in some way, a tuned Series X/S is obvious. If I just look at how quiet the Series X is and how slowly its fan rotates, in my opinion there is potential for a significant increase in clock speed, which would affect the entire APU. I think a 15% increase is achievable, so we could already be talking about 14 TFLOPS in the case of the Series X. If they make use of the INT4/INT8 processing that is in the Xboxes, this could be an elegant answer.

What you're saying in terms of boosting the clocks is plausible. I just think presenting the Series X as an alternative to the PS5 Pro is going to be a tough sell; it would be better positioned against the base PS5, because consumers actually look at comparative output, and despite the better hardware, the Series X hasn't shown why it's better than the PS5 in real-world games. If the PS5 Pro releases and its hardware-accelerated AI upscaling doesn't perform as well as expected, that would be a marketing opportunity for MSFT with the Series X. But given the specs leak and Sony's track record, I think the PS5 Pro is going to knock it out of the park and make this positioning quite difficult.
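Sanity-checking the "15% clock bump ≈ 14 TFLOPS" claim above against the published Series X shader configuration (52 CUs, 64 FP32 lanes per CU, FMA counted as 2 ops per clock, 1825 MHz):

```python
# Back-of-the-envelope check of the 15%-clock-bump claim, assuming the published
# Series X configuration: 52 CUs, 64 FP32 lanes per CU, 2 ops per lane per clock.
def fp32_tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64, ops_per_clock: int = 2) -> float:
    return cus * lanes_per_cu * ops_per_clock * clock_ghz / 1000.0

stock = fp32_tflops(52, 1.825)            # ~12.15 TFLOPS (the official figure)
boosted = fp32_tflops(52, 1.825 * 1.15)   # ~13.97 TFLOPS with a 15% clock increase
print(f"stock: {stock:.2f} TF, +15% clock: {boosted:.2f} TF")
```

A 15% clock increase would indeed land at roughly 14 TFLOPS, assuming nothing else changes.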
I'm really curious to see how PSSR stacks up. There's that image of a Ratchet and Clank: Rift Apart comparison from MLID.. and while I'm not going to get into the validity of it... it does show a decent improvement over the game's TAAU mode and FSR2.

It actually prompted me to fire up Ratchet again on PC to test them out. Since I believe FSR2 got an improvement in a later patch, I was curious to see if it was appreciably better at all.. and, well.. whatever the case may be, FSR2 in Ratchet is still complete trash. Actually even worse than I remember lol.

I made a quick vid very clearly demonstrating its issue. I don't really even need to mention this since it's so obvious.. but focus on the power lines swinging. First I show TAA, then IGTI, then XeSS, DLSS, and finally FSR2. Res was set to my monitor's native resolution of 3840x1600, and all were tested in performance mode.. so essentially 50% res on each axis, meaning a 1920x800 input res.. and then at the very end I show that even Quality FSR2 is worse than any other reconstruction technique at performance.

My main reason for posting this is that Sony's solution will undoubtedly provide higher quality than the terrible FSR2 which is plaguing console image quality atm. So even if that's all it did.. it would be a massive improvement for IQ in upcoming games.

Oh yes. I'm used to comparing IQ using compressed sources and was really impressed by PSSR compared to both TAAU and FSR2. And the pic used was actually one of the best-case scenarios for FSR2. But the best comparisons will be against DLSS2.
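For reference, the input resolutions quoted in that test follow directly from the per-axis scale factors temporal upscalers typically use. The mode-to-scale mapping below is the usual DLSS/FSR 2 convention and is assumed here rather than taken from the post.

```python
# Internal render resolution per upscaler quality mode, assuming the standard
# DLSS/FSR 2 per-axis scale factors (Quality 1/1.5, Balanced 1/1.7, Performance 1/2,
# Ultra Performance 1/3). These factors are assumptions, not from the post.
MODE_SCALE = {"quality": 1 / 1.5, "balanced": 1 / 1.7, "performance": 1 / 2.0, "ultra_performance": 1 / 3.0}

def input_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = MODE_SCALE[mode]
    return round(out_w * s), round(out_h * s)

print(input_resolution(3840, 1600, "performance"))  # (1920, 800), matching the test above
print(input_resolution(3840, 1600, "quality"))      # (2560, 1067)
```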
So, going by this, can we assume that the PS5 Pro will have an NPU-like hardware block? It says "fully custom" on the slide, and "Accessed only by libraries" seems like they want to keep it completely isolated from coders.

PSSR is another name for PSML MFSR (PlayStation Machine Learning Multi Frame Super Resolution **cough). They needed a short name to market it (against DLSS), obviously.
BTW "PSSR" seems to have little to nothing to do with the "Spectral Upscaling" patent posted a few pages back. This is definitely talking about using similar inputs to DLSS and FSR.
"PlayStation Spectral Super Resolution" seems to be kind of new marketing name for something that had been called "PSML MFSR" in the SDK documentation.
They'll react the way they reacted to the first serious Pro leak: 17 months later, with the X1X. They'll come up with bigger specs... likely a year later. Not sure how they'll react to PSSR though, as that's software, not hardware.
That wouldn't be a refresh, that would be a near pro-style console.

They'd probably trade blows in much the same way as the PS5 and Series X do now.
If we're keeping the "refresh" to 52CUs and going to RDNA4 + Zen4/5 + increased clocks, you might have your Series X "refresh" outperform the PS5 Pro.
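Putting rough numbers on that: the clock a 52-CU part would need to hit a given FP32 figure. The 16.7 TF entry is the widely reported PS5 Pro leak number and is used here only as an assumption.

```python
# Clock required for a 52-CU RDNA-style GPU (64 FP32 lanes per CU, 2 ops/clock)
# to reach a given FP32 target. 16.7 TF is the commonly reported PS5 Pro leak
# figure, included purely as an assumption for comparison.
def clock_for_target(target_tflops: float, cus: int = 52, lanes_per_cu: int = 64, ops_per_clock: int = 2) -> float:
    return target_tflops * 1000.0 / (cus * lanes_per_cu * ops_per_clock)  # GHz

for target in (12.15, 14.0, 16.7):
    print(f"{target:.2f} TF on 52 CUs needs ~{clock_for_target(target):.2f} GHz")
```

In other words, a 52-CU refresh would need roughly a 2.5 GHz GPU clock to match that leaked figure on raw FP32 alone, before any architectural gains from RDNA4.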
IGTI is the best option outside of DLSS IMO. It's crazy to me that people claim FSR 2 is superior. Controversial take, but FSR 2 is worse than FSR 1.
Because the very low rendering resolution used in these tests, and in game performance modes in general, is responsible for the poor image quality, not FSR or any upscaler per se!
According to a known dev at Era, 300 8-bit TOPS is equivalent to a 3090 Ti and about 20% lower than a 4080. He thinks that's well enough to properly reconstruct to 4K similarly to those GPUs using DLSS.

He is wrong; the RTX 3090 is weaker, at 285 TOPS. All the 40-series INT8 TOPS figures are doubled, and I have no idea how Nvidia calculates them; it could be marketing. Edit: my mistake, the Ti version is 320 TOPS.
We have no idea how AMD are calculating their numbers either.
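Using only the figures quoted in this exchange (285, 300 and 320 TOPS) plus the "doubled" point, here is how the comparison shakes out, with the 2x treated as an assumed sparse-versus-dense quoting convention:

```python
# Comparison of the INT8 TOPS figures quoted above: 285 (RTX 3090), 320 (RTX 3090 Ti),
# 300 (rumored part). The 2x factor is assumed to be sparse-vs-dense quoting,
# which is an interpretation, not something confirmed in the thread.
cited = {"RTX 3090": 285, "RTX 3090 Ti": 320}

def relative(a: float, b: float) -> str:
    return f"{(a / b - 1) * 100:+.1f}%"

for name, tops in cited.items():
    print(f"rumored 300 TOPS vs {name} ({tops} TOPS): {relative(300, tops)}")

# If a vendor quotes sparse TOPS (2x dense), the same hardware looks twice as fast:
dense = 300
print(f"dense {dense} TOPS -> quoted as {dense * 2} TOPS with 2:4 structured sparsity")
```

Which is why, without knowing whether each vendor is quoting dense or sparse throughput, none of these TOPS comparisons are apples to apples.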