Current Generation Hardware Speculation with a Technical Spin [post launch 2021] [XBSX, PS5]

Status
Not open for further replies.
A few interesting sentences:
What I can say for sure now is PlayStation 5 and Xbox Series X currently run our code at about the same performance and resolution.
As for the NV 3000-series, they are not comparable, they are in different leagues in regards to RT performance. AMD’s hybrid raytracing approach is inherently different in capability, particularly for divergent rays. On the plus side, it is more flexible, and there are myriad (probably not discovered yet) approaches to tailor it to specific needs, which is always a good thing for consoles and ultimately console gamers. At 4A Games, we already do custom traversal, ray-caching, and use direct access to BLAS leaf triangles which would not be possible on PC.
Not sure if we’d go for mesh shaders in the future as we are not that dependent on traditional vertex/primitive/raster processing anymore on recent architectures. Our current frames are only about 10% raster and 90% compute on PlayStation 5 and Xbox Series X. And raster pairs well with async compute.
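To make the "custom traversal" point concrete, here's a toy CPU-side sketch of what hand-rolled BVH traversal looks like: an explicit stack loop over nodes instead of opaque fixed-function traversal. The node layout and names are invented for illustration; the real engine would do this in a compute shader against AMD's BVH format, using the hardware ray/box intersection instructions for the tests.

```python
def ray_aabb(origin, inv_dir, lo, hi):
    """Slab test: does the ray hit the axis-aligned box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * d, (h - o) * d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(nodes, origin, direction):
    """Walk the BVH with our own stack; leaves expose their triangle
    directly (the 'direct access to BLAS leaf triangles' idea)."""
    inv_dir = tuple(1.0 / d if d != 0 else float("inf") for d in direction)
    hits, stack = [], [0]              # start at the root node
    while stack:
        node = nodes[stack.pop()]
        if not ray_aabb(origin, inv_dir, node["lo"], node["hi"]):
            continue
        if "tri" in node:              # leaf: record the triangle index
            hits.append(node["tri"])
        else:                          # interior: push children, keep walking
            stack.extend(node["children"])
    return hits
```

Because the loop is just your own code, this is where console-specific tricks like early-out heuristics or ray caching would slot in.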
 
I didn’t like the beginning of Exodus, so I stopped. But from reading reviews, I suspect it gets better later on.

Alex seems to be happy here.

He’s getting in his deadlifts today
We are getting our first RT only title in AAA space - oh yeah I am happy! :)

Oles' interview answers there with WCCFTech are quite enlightening - we should have him back for this title on release with a long-form interview.
 
Yeah. I like how honest he usually is with his answers. He answers questions quite unlike most devs.
 
Choice quotes on silicon yield which is an interesting discussion bit to pocket for later:
Xbox Series X SoC: Power, Thermal, and Yield Tradeoffs (anandtech.com)
  • A 300 mm wafer has 706.86 cm2 of area
  • A defect rate of 0.09 defects per cm2 means ~64 defects per wafer
  • Scarlett is 360.4 mm2 (15.831 mm x 22.765 mm)
  • Note that SoCs are rectangles, and wafers are circular
  • Wafer die calculators show that 100% yield of this SoC size would give 147 dies per wafer
  • Microsoft sets the frequency/power such that if all dies are good, all can be used
  • With a 0.09 / cm2 defect rate, there are 107 good dies per wafer
  • This means a 73% yield, 107 / 147
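Those bullets can be sanity-checked with a few lines of arithmetic. A sketch, using the simple Poisson yield model Y = e^(−D·A); note the listed die dimensions multiply out to ≈360.4 mm², and the 147 gross-dies-per-wafer figure is taken from the article (it depends on scribe lines and edge exclusion, so it isn't derivable from area alone):

```python
import math

wafer_area_cm2 = math.pi * (30 / 2) ** 2            # 300 mm wafer -> ~706.86 cm^2
defect_rate = 0.09                                  # defects per cm^2
die_area_cm2 = 15.831 * 22.765 / 100                # ~3.60 cm^2

defects_per_wafer = defect_rate * wafer_area_cm2    # ~64 defects per wafer
gross_dies = 147                                    # from a die-per-wafer calculator
yield_frac = math.exp(-defect_rate * die_area_cm2)  # Poisson model: ~0.72
good_dies = gross_dies * yield_frac                 # ~106, close to the quoted 107
```

The Poisson model lands within one die of AnandTech's 107/147; the small gap is presumably rounding or a slightly different yield model on their side.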
Assuming a defect happens in one of the GPU compute units or WGPs, which is a very good chance because the GPU is the biggest part of the processor, by absorbing that defect and disabling that WGP, that SoC can be used in a console and the effective yield is higher.

When the defect rate is 0.09, which is nice and low, the chances that two defects occur on the same chip are very small. Even then, by choosing to run a design with only 26 WGPs enabled, two less than the full 28 WGPs, almost everything that comes off the manufacturing line can be used – an effective yield increase, reducing the average cost per processor by a third.
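The "two defects on the same chip" claim can be quantified with the same Poisson model. A rough sketch — it optimistically assumes any single defect lands somewhere salvageable (a disable-able WGP), which overstates things a bit since the WGPs are only part of the die:

```python
import math

lam = 0.09 * (15.831 * 22.765 / 100)  # expected defects per die, D x A ~= 0.32
p_zero = math.exp(-lam)               # defect-free die: ~72%
p_one = lam * math.exp(-lam)          # exactly one defect: ~23%
p_usable = p_zero + p_one             # ~96% salvageable by disabling a WGP
p_scrap = 1 - p_usable                # ~4% of dies carry two or more defects
```

Going from ~72% perfect dies to ~96% usable dies is consistent with the article's "almost everything that comes off the manufacturing line can be used."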

...

This would make the Series X sales around 2.33 million CPUs, suggesting a minimum of 16000 wafers total at 100% yield, or up to 21800 wafers at 73% yield. The real number is likely to be somewhere between the two, but you can see how much of an effect the configuration choice can have on getting product to market in time, as well as the cost per processor.
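The wafer-count arithmetic in that quote is easy to reproduce. A quick sketch, assuming ~2.33 million consoles and the 147/107 dies-per-wafer figures from above:

```python
consoles = 2_330_000

wafers_best = consoles / 147   # ~15,850 wafers if every die were good
wafers_worst = consoles / 107  # ~21,780 wafers at 73% yield with no salvage
```

The article's "16000" and "21800" are these numbers rounded up; WGP salvage pushes the real figure toward the lower bound.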


****
The yield is surprisingly lower than I thought, even though I keep reading: good yield! It really puts the price points of PC GPUs into perspective. And it does sound like stock levels will be low relative to demand for quite a while.
 
Really excited about this. I'm a fan of this game and 4A. That 90% compute number (wow!) is relevant to our discussion a day or two ago; I hope it makes the "renderers trend towards compute" prediction sound a little more concrete.
So my opinion that games are already compute-heavy was even a little too conservative ;d
 
there's not too many of these titles out there yet ;)

Thousands of titles are released each year; it's just the major ones that are expected to be more compute-heavy.
 
https://www.4a-games.com.mt/4a-dna/...-for-playstation-5-and-xbox-series-xs-upgrade

"Metro Exodus will run at 4K / 60FPS with full Ray Traced lighting throughout on PlayStation 5 and Xbox Series X. The base game and DLC expansions will feature both our ground-breaking Ray Traced Global Illumination (RTGI) and the Ray Traced Emissive Lighting techniques pioneered in The Two Colonels expansion across all content."

I see the console editions will not have RT reflections. Bit of a bummer for console gamers seeing as ME has plenty of water, puddles, broken glass and windows within the environment.
 
Note that SoCs are rectangles, and wafers are circular

Has there ever been any move towards a different pairing of shapes? I've taken the oblong in circle setup for granted, but why is it that there aren't, for example, hexagons within circular wafers, or just straightforward oblong wafers?

Even though it's what we suspected may be the case, is this the first actual confirmation that PS5 doesn't have hardware VRS?

Although it may be the case, I think it's still too early to call without more solid data. It may be that the hardware is there, but the dev kits haven't exposed it yet. Sony seem to have prioritised a dev kit that lets PS4 developers quickly transition to extracting decent performance from the PS5. VRS wasn't available on the PS4/Pro, so exposing such hardware (if it's even there) may have been judged not to yield much benefit compared to the resources it would take Sony's dev kit engineers to expose it and developers to implement it.

Microsoft have taken the opposite approach of less mature tools, which presently results in lesser utilisation of their hardware (although their hardware is also powerful enough that this doesn't produce an appreciable performance difference with their main competitor), but it has also resulted in relatively widespread adoption of VRS.
 
If VRS brings a 5-10 fps increase that can simply be clawed back by outputting slightly fewer pixels, or by using software VRS (à la MW), then it makes total sense why Sony would not implement it.
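To put numbers on clawing frames back by rendering fewer pixels: if GPU cost scales roughly linearly with pixel count (a crude assumption — plenty of per-frame work is resolution-independent), the per-axis scale factor needed to recover a given frame rate is just the square root of the frame-time ratio:

```python
def axis_scale(current_fps: float, target_fps: float) -> float:
    """Per-axis resolution scale that frees enough GPU time to reach
    target_fps, assuming cost is proportional to pixel count."""
    return (current_fps / target_fps) ** 0.5

# Recovering 55 -> 60 fps costs roughly 4% per axis,
# e.g. 3840x2160 down to about 3680x2070.
scale = axis_scale(55, 60)
```

That's a small enough resolution drop to be hard to notice, which is the point being made: the same frame time can come from a modest scaler tweak instead of hardware VRS.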

It might be required for MS, along with Mesh Shaders and SFS, just so they can unify DX12U across their whole family of devices. But for Sony? I think they would rather have the chip and dev kits ready well in advance and spend that budget somewhere else.

IMO a good decision by Sony, one after the other (exploiting max clock rates, great IO/SSD, the controller, etc.)
 