PlayStation 5 [PS5] [Release November 12, 2020]

All a company has to do is make a faceplate that doesn't match Sony's design but just happens to have some pegs in the correct positions that fit into the holes on the PS5. There's nothing Sony can do to stop that. They can trademark the design used by the PS5, but they can't actually prohibit the sale of something that isn't the same as that design but just "happens to fit".

Like, say, side panels shaped like Viking shields with crossed axes.

Regards,
SB

Or make the outside shape flat so we can lay the PS5 down horizontally without that wobbly stand.
 
This doesn't seem right to me. Surely they must have been fed wrong information? They don't point to or include a proper source, so I can't believe it to be accurate. Or maybe something was lost in translation (auto-translation is doing a bad job)?

https://it.ign.com/ps5/175513/news/...console-non-supportera-la-risoluzione-a-1440p

I think the original news about the PS5 supporting 1440p came from a product listing on BenQ or somewhere like that. I don't think Sony has ever made an official statement about it.
 
A little interesting but unverified info here from RedGamingTech.
He spoke with a developer source who told him that the GPU hits the max clock frequency 95% of the time and drops down to 2.1 GHz under intense CPU loads, for less than a frame.
The CPU is the same as desktop Zen 2 but slightly customized, with 8 MB of unified L3 cache.
He also goes into more detail about the Geometry Engine, primitive shaders, and VRS.

 
A little interesting but unverified info here from RedGamingTech.
He spoke with a developer source who told him that the GPU hits the max clock frequency 95% of the time and drops down to 2.1 GHz under intense CPU loads, for less than a frame.
The CPU is the same as desktop Zen 2 but slightly customized, with 8 MB of unified L3 cache.
He also goes into more detail about the Geometry Engine, primitive shaders, and VRS.

How do these guys go from "I was told about this by a certain developer who I cannot independently verify" to "the reason Sony is not pushing tech talk with the PS5, in comparison to the PS3 and even PS4, is Jim Ryan"? How would he or anyone else, including developers, know this? I'd guess this is discussed only amongst higher-ups at Sony.

But yeah, 8 MB sounds about right, and it already leaked with the Flute benchmark (4 MB per CCX - 1/4 of PC Zen 2).

As for the clocks, if that's true I think it would be pretty sweet. Even RDNA2 chips are not clocked that high judging by leaks (reference cards, that is).
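For what it's worth, the "drops under combined load" behaviour can be sketched as a toy power-budget model. Everything numeric here (the activity weights, the budget, the demand formula) is an invented assumption purely for illustration; only the 2.23 GHz ceiling and the rumoured 2.1 GHz floor come from the discussion above, and this is not Sony's actual algorithm:

```python
# Toy model of a power-budget-limited boost clock (illustrative numbers only).
# The idea: the clock only scales back when combined CPU+GPU activity would
# exceed a shared power budget, which the rumour says is rare (~5% of frames).

MAX_GPU_CLOCK = 2.23  # GHz, PS5's stated ceiling
MIN_OBSERVED = 2.10   # GHz, the floor mentioned in the rumour

def gpu_clock(cpu_activity: float, gpu_activity: float,
              power_budget: float = 1.0) -> float:
    """Return a GPU clock in GHz for activity levels in 0.0-1.0.

    The weights below are assumed, not measured: they just make combined
    worst-case demand exceed the budget so a downclock can occur.
    """
    demand = 0.5 * cpu_activity + 0.7 * gpu_activity  # assumed weighting
    if demand <= power_budget:
        return MAX_GPU_CLOCK
    # Scale the clock down proportionally to the overshoot, never below floor.
    scaled = MAX_GPU_CLOCK * power_budget / demand
    return max(scaled, MIN_OBSERVED)

# Typical load: full boost clock.
print(gpu_clock(cpu_activity=0.5, gpu_activity=0.7))  # -> 2.23
# Worst case, heavy CPU *and* GPU load: drops to the rumoured floor.
print(gpu_clock(cpu_activity=1.0, gpu_activity=1.0))  # -> 2.1
```

The point of the sketch is just that the downclock is workload-driven rather than thermal, which matches Cerny's "deterministic, based on activity" description.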
 
Has there been any confirmation from Sony that we can park PS5 games on an external drive to free up space for other games?
 
How do these guys go from "I was told about this by a certain developer who I cannot independently verify" to "the reason Sony is not pushing tech talk with the PS5, in comparison to the PS3 and even PS4, is Jim Ryan"?

But yeah, 8 MB sounds about right, and it already leaked with the Flute benchmark (4 MB per CCX - 1/4 of PC Zen 2).

As for the clocks, if that's true I think it would be pretty sweet. Even RDNA2 chips are not clocked that high judging by leaks (reference cards, that is).
Not feeling anything new from this video:

Rapid packed math (int4/int8) is native in the CUs for RDNA 1.
Vega Primitive Shaders:
a) leverage compute shader dispatch
b) "RDNA also introduces working primitive shaders. While the feature was present in the hardware of the Vega architecture, it was difficult to get a real-world performance boost from and thus AMD never enabled it. Primitive shaders in RDNA are compiler-controlled." [7]

@Ryan Smith writes:
The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still it’s enabled this time. The primitive shader is compiler controlled, and thanks to some hardware changes to make it more useful, it now makes sense for AMD to turn it on for gaming. Vega’s primitive shader, though fully hardware functional, was difficult to get a real-world performance boost from, and as a result AMD never exposed it on Vega.

So I'm not sure if the leak actually exposed anything that Cerny hasn't already said.
95% is a number used by Cerny. I guess dropping to 2.1 GHz is new. Well, at least for some it's new.
And Primitive Shaders are very comparable to Mesh Shaders, with some differences. I believe I was looking at the differences a long time ago, but I started to zone out as it got boring to read.

https://www.sashleycat.com/post/tech-babble-10-vega-shaders-and-gcn
Vega's geometry pipeline is very capable - incorporating an improved Primitive Discard Engine from the previous 'Polaris' design; an engine that can discard zero-area, or degenerate (invisible) triangles before the GPU can draw them - otherwise wasting clock cycles drawing something that the user cannot see. It also has a huge L2 cache to keep more geometry data on chip and can re-use vertex data from earlier in the pipeline.

A lot of this should sound familiar (working hard on discard) if you look back at everything that has been said in the past.
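That "discard zero-area triangles before drawing" step is easy to illustrate with a minimal screen-space area test. This is a generic sketch of the technique in Python, not AMD's hardware logic:

```python
# Minimal sketch of pre-raster triangle culling: a triangle whose
# screen-space signed area is zero (degenerate/collinear) or negative
# (back-facing, assuming counter-clockwise front winding) is discarded
# before it ever reaches rasterization or pixel shading.

def signed_area(a, b, c):
    """Twice the signed screen-space area of triangle (a, b, c)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def cull(triangles):
    """Keep only triangles with positive area (front-facing, visible)."""
    return [t for t in triangles if signed_area(*t) > 0]

tris = [
    ((0, 0), (4, 0), (0, 4)),  # counter-clockwise: kept
    ((0, 0), (2, 2), (4, 4)),  # collinear -> zero area: discarded
    ((0, 0), (0, 4), (4, 0)),  # clockwise -> back-facing: discarded
]
print(len(cull(tris)))  # -> 1
```

Two of the three triangles never cost a single shading cycle, which is the whole pitch of a primitive discard engine: the earlier in the pipeline the test runs, the more downstream work it saves.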
 
One thing I'm not getting from that video is how exactly the GE culls triangles much earlier than Mesh Shaders, with even more control and programmability, and why VRS should be handled inside the GE. How does that make any sense?

I think perhaps Sony didn't even care about VRS, considering you only get a 5-10% performance increase (if you don't want to lose fidelity and bring out the artifacts). Might as well lower the resolution a few percent and have a good upscaler, no?
 
One thing I'm not getting from that video is how exactly the GE culls triangles much earlier than Mesh Shaders, with even more control and programmability, and why VRS should be handled inside the GE. How does that make any sense?

I think perhaps Sony didn't even care about VRS, considering you only get a 5-10% performance increase (if you don't want to lose fidelity and bring out the artifacts). Might as well lower the resolution a few percent and have a good upscaler, no?
Putting aside the geometry engine triangle-culling stuff, I don't think Sony has the same VRS solution as MS, far from it. From what I understood, MS uses lower-resolution shading in specific areas in order to save performance, while Sony renders higher-resolution detail in strategic areas in order to visually improve the image quality at quite a low cost. Those are two very different objectives. And like you, I don't see the point of MS's VRS. In the end they save resources, yes, but not that much, and it has a cost: the final image does look lower resolution.
 
Putting aside the geometry engine triangle-culling stuff, I don't think Sony has the same VRS solution as MS, far from it. From what I understood, MS uses lower-resolution shading in specific areas in order to save performance, while Sony renders higher-resolution detail in strategic areas in order to visually improve the image quality at quite a low cost. Those are two very different objectives. And like you, I don't see the point of MS's VRS. In the end they save resources, yes, but not that much, and it has a cost: the final image does look lower resolution.
Yes, it doesn't seem easy to implement either, but Gears 5 uses it (VRS Tier 2, quality mode) and it brings a 5-12% increase in performance. Perhaps it's a bit too much fuss for something you can effectively replace with a slightly lower resolution (in a time when the resolution of 99% of games is not locked and varies anyway, nobody seems to care much).

As for Sony, I don't even know what they use, if anything, but that was my point: not using VRS won't matter all that much with dynamic resolutions.
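To put rough numbers on that trade-off: pixel-shading cost scales with pixel count, so any given VRS saving can be matched by a small per-axis resolution drop. A quick back-of-the-envelope sketch (assuming shading cost is simply proportional to pixels shaded, which ignores fixed per-frame costs):

```python
import math

def axis_scale_for_saving(saving: float) -> float:
    """Per-axis resolution scale that cuts pixel count by `saving` (0-1).

    Pixel count goes with width * height, so scaling both axes by
    sqrt(1 - saving) removes the same fraction of shading work.
    """
    return math.sqrt(1.0 - saving)

# How much would 4K have to drop to match typical reported VRS savings?
for saving in (0.05, 0.10, 0.12):
    s = axis_scale_for_saving(saving)
    w, h = round(3840 * s), round(2160 * s)
    print(f"{saving:.0%} saving ~= {w}x{h} instead of 3840x2160")
```

A 10% saving works out to roughly 3643x2049 instead of 3840x2160, i.e. about a 5% drop on each axis, which is well inside the range most dynamic-resolution systems already swing through unnoticed. That's the core of the argument above.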
 
And like you, I don't see the point of MS's VRS. In the end they save resources, yes, but not that much, and it has a cost: the final image does look lower resolution.

Not to get into it in this thread, but that's entirely the opposite of the Digital Foundry take on VRS Tier 2 usage.
 
Not to get into it in this thread, but that's entirely the opposite of the Digital Foundry take on VRS Tier 2 usage.
Yeah, they seemed to suggest that Tier 2 solves some of the issues present in Gears Tactics, which was apparently a Tier 1 implementation. They also gave a bit of insight as to why that was the case.

Regardless... these features are just a means to an end anyway, with the goal of improving performance/efficiency while maintaining high image quality. MS will have their features/optimizations, and Sony will have theirs. The games likely to benefit most from these will mostly be exclusive to their respective platforms in the first place, and thus not really comparable like-for-like anyway.

Maybe I'm wrong, but that's my feeling on it thus far.
 
With Sony's solution the discard is done from the beginning, avoiding all the later pipeline operations (including the shading that VRS would only reduce at the end) via deferred vertex attribute shading.
 
With Sony's solution the discard is done from the beginning, avoiding all the later pipeline operations (including the shading that VRS would only reduce at the end) via deferred vertex attribute shading.
No, these are two completely different things. Culling of geometry is done in the geometry pipeline (Mesh Shaders and the Geometry Engine). VRS is done during the pixel shading phase, i.e. after the MS/GE has already culled geometry. You will still have to do pixel shading, hence VRS exists alongside Mesh Shaders; they are two different things.
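The ordering being described (whole-triangle culling first, shading-rate reduction later, each saving a different kind of work) can be shown with a deliberately simplified pipeline sketch. All the numbers and the flat `area`/`pixels` model are made up for illustration; real pipelines have many more stages:

```python
# Simplified pipeline sketch: culling (GE / mesh shader stage) removes
# whole triangles before rasterization; VRS then reduces how many pixel
# shader invocations the *surviving* triangles generate.

def pipeline(triangles, vrs_rate=1):
    visible = [t for t in triangles if t["area"] > 0]    # geometry stage
    pixels = sum(t["pixels"] for t in visible)           # rasterization
    # VRS shades one sample per (rate x rate) block of pixels.
    shader_invocations = pixels // (vrs_rate * vrs_rate)  # pixel stage
    return len(visible), shader_invocations

scene = [
    {"area": 10, "pixels": 4000},
    {"area": 0,  "pixels": 0},     # degenerate: culled before rasterizing
    {"area": 5,  "pixels": 2000},
]
print(pipeline(scene))              # -> (2, 6000): no VRS
print(pipeline(scene, vrs_rate=2))  # -> (2, 1500): 2x2 VRS, same triangles
```

Note that changing the VRS rate leaves the triangle count untouched, and culling a triangle removes its pixels entirely. That independence is exactly why the two features coexist rather than substitute for each other.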
 
With Sony's solution the discard is done from the beginning, avoiding all the later pipeline operations (including the shading that VRS would only reduce at the end) via deferred vertex attribute shading.
Primitive shaders and mesh shaders replace the entire front end of our current pipelines.

Both of them would be done in the beginning, at least with respect to the pipeline.

Video here:

The top part of the chart is the traditional fixed-function pipeline.
Underneath is the compute queue.
Underneath that is Mesh Shaders.
And naturally that would be the Primitive Shader pipeline as well.

Thinking out loud, one of the differences I recall is that Mesh Shaders work specifically with Task Shaders, something primitive shaders didn't have. That's about all I remember though; I can no longer find the Vega whitepaper.

AMD describes this as their NGG:
  • Next-generation geometry pipeline: Today’s games and professional applications make use of incredibly complex geometry enabled by the extraordinary increase in the resolutions of data acquisition devices. The hundreds of millions of polygons in any given frame have meshes so dense that there are often many polygons being rendered per pixel. Vega’s next-generation geometry pipeline enables the programmer to extract incredible efficiency in processing this complex geometry, while also delivering more than 200% of the throughput-per-clock over previous Radeon architectures.1 It also features improved load-balancing with an intelligent workload distributor to deliver consistent performance.
And I'm not sure if this feature ever saw the light of day:
Advanced pixel engine: The new Vega pixel engine employs a Draw Stream Binning Rasterizer, designed to improve performance and power efficiency. It allows for “fetch once, shade once” of pixels through the use of a smart on-chip bin cache and early culling of pixels invisible in a final scene. Vega’s pixel engine is now a client of the onboard L2 cache, enabling considerable overhead reduction for graphics workloads which perform frequent read-after-write operations.
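On the Task Shader point: in the DX12 mesh pipeline the optional task (DX12 calls it "amplification") stage decides how many mesh shader groups to launch, and each mesh group emits one small meshlet. Here's a CPU-side Python sketch of that dispatch model; the meshlet sizes and visibility flags are invented, and this is obviously not GPU code:

```python
# Sketch of the task -> mesh shader dispatch model (DX12 "amplification"
# stage, "task shader" in NVIDIA/Vulkan terms). The task stage can cull
# whole meshlets and chooses how many mesh shader groups run; each mesh
# group then outputs its own small bundle of vertices and triangles.

def task_shader(meshlets, camera_visible):
    """Amplification stage: emit one mesh-group launch per visible meshlet."""
    return [m for m in meshlets if camera_visible(m)]

def mesh_shader(meshlet):
    """Mesh stage: output vertices/triangles for one meshlet (stubbed out)."""
    return {"vertices": meshlet["verts"], "triangles": meshlet["tris"]}

meshlets = [
    {"id": 0, "verts": 64, "tris": 126, "visible": True},
    {"id": 1, "verts": 64, "tris": 126, "visible": False},  # culled whole
    {"id": 2, "verts": 32, "tris": 40,  "visible": True},
]
launched = task_shader(meshlets, lambda m: m["visible"])
outputs = [mesh_shader(m) for m in launched]
print(len(outputs))                          # -> 2 mesh groups ran
print(sum(o["triangles"] for o in outputs))  # -> 166 triangles emitted
```

The interesting bit is that meshlet 1 never launches a mesh group at all, which is the coarse-grained culling that plain Vega primitive shaders lacked a dedicated stage for.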
 
No, these are two completely different things. Culling of geometry is done in the geometry pipeline (Mesh Shaders and the Geometry Engine). VRS is done during the pixel shading phase, i.e. after the MS/GE has already culled geometry. You will still have to do pixel shading, hence VRS exists alongside Mesh Shaders; they are two different things.
Yes, you are right. I just wanted to point out that the benefits of VRS are implicit in the GE.
 
Reading about DX12 mesh shaders and AMD's implementation, what I'm not clear on is whether these are compute shaders run by the API (and so would take a good chunk of CU processing time) or have their own hardware block, as the GE seems to have. It seems AMD had a hardware block with primitive shaders after the geometry shader and tessellation stages, but seeing it was coming to DX12, decided not to complicate things and replaced it with compute shaders.

Skimming through some of the Cerny patents, what they seem to be doing is dividing the screen into primitive tiles (similar to PowerVR's famous tile-based deferred shading?) that, depending on certain parameters (POV, frustum...), are culled and not rasterized, or are processed at lower resolution. I suppose the parameter check avoids wasting cycles in the geometry stage until the scene geometry changes.
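That screen-tiling idea, in its most generic form, looks something like the sketch below. To be clear, this is textbook tile binning with an invented tile size, and assumes nothing about what the patents actually implement:

```python
# Generic screen-tile binning sketch: primitives are bucketed into fixed
# tiles by bounding box; anything wholly off-screen is skipped outright,
# and each tile could in principle be processed at its own resolution.

TILE = 32  # tile size in pixels (arbitrary choice for this example)

def bin_primitives(prims, width, height):
    """Map each primitive's bounding box to the tiles it touches."""
    tiles = {}
    for p in prims:
        x0, y0, x1, y1 = p["bbox"]
        # Trivial cull: skip primitives entirely outside the screen.
        if x1 < 0 or y1 < 0 or x0 >= width or y0 >= height:
            continue
        for ty in range(max(y0, 0) // TILE, min(y1, height - 1) // TILE + 1):
            for tx in range(max(x0, 0) // TILE, min(x1, width - 1) // TILE + 1):
                tiles.setdefault((tx, ty), []).append(p["id"])
    return tiles

prims = [
    {"id": "a", "bbox": (0, 0, 40, 40)},          # spans a 2x2 block of tiles
    {"id": "b", "bbox": (-100, -100, -10, -10)},  # off-screen: culled
]
tiles = bin_primitives(prims, 1280, 720)
print(sorted(tiles))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Once primitives are binned like this, per-tile decisions (skip, rasterize, or process at reduced resolution) fall out naturally, which fits the patent description of culling or down-resolving per tile based on parameters like the frustum.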
 
I don't know about Primitive Shaders, but a Mesh Shader really is a compute shader that can export its results to the rest of the pipeline; absolutely no assembling of geometry objects is done.
 