> PS4 isn't using GCN 1.0, is it? That'd be weird.
The best information I've seen points to GCN 1.1, or something very close to it.
> Of course, if the 480 didn't exist, I would have thought these 36 CUs corresponded to an older GCN architecture with 36 CUs. Is there any correspondence with an older GCN GPU? What do you think? Is Neo still using a 7000/Hawaii-series GPU on 28nm?
Obviously not; PS4 was already GCN2. The point was that the CU count is just a coincidence: Neo has a GCN3 or GCN4 iGPU, and the CU count being the same as Polaris is coincidental.
> So PS and Xbox will both use the same or very close GPU? Also, does that mean the Xbox version will be running at a higher frequency?
PS4 and XB1 did. PSNeo and Scorpio probably won't. Scorpio is coming late enough to be Vega-based (gfx IP level 9, while Polaris is gfx IP level 8.x, like Tonga and Fiji). If I had to guess from the 6 TFLOPS number, assuming it's just GPU FLOPS, I'd put my money on 48 CUs @ 1 GHz.
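That 6 TFLOPS guess is easy to sanity-check: each GCN CU has 64 shader lanes, and a fused multiply-add counts as two FLOPs per lane per clock. A quick back-of-envelope sketch (the CU counts and clocks below are the thread's guesses and rumours, not confirmed specs):

```python
# Back-of-envelope peak FP32 throughput for a GCN GPU.
# peak FLOPS = CUs * 64 lanes/CU * 2 FLOPs per FMA * clock
def peak_tflops(cus: int, clock_ghz: float) -> float:
    lanes_per_cu = 64    # GCN: four SIMD16 units per CU
    flops_per_fma = 2    # a fused multiply-add is two FLOPs
    return cus * lanes_per_cu * flops_per_fma * clock_ghz / 1000.0

print(peak_tflops(48, 1.0))    # guessed Scorpio config -> 6.144 TFLOPS
print(peak_tflops(36, 0.911))  # rumoured Neo config   -> ~4.2 TFLOPS
```

At 48 CUs and 1 GHz the formula lands slightly above the quoted 6 TFLOPS figure, which is consistent with that number being rounded down in marketing material.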
> Scorpio is coming late enough to be Vega-based (gfx IP level 9, while Polaris is gfx IP level 8.x, like Tonga and Fiji).
What's the difference between the two levels?
> What's the difference between the two levels?
We'll see what the difference is this time when Vega is out. More advanced one way or another, ISA changes, etc.
Indeed, I'd expect most of the performance difference to come from more CUs.
Digital Foundry: What are your thoughts on adopting Vulkan/DX12 as primary APIs for triple-A game development? Is it still too early?
Axel Gneiting: I would advise anybody to start as soon as possible. There is definitely a learning curve, but the benefits are obvious. Vulkan actually has pretty decent tools support with RenderDoc already and the debugging layers are really useful by now. The big benefit of Vulkan is that shader compiler, debug layers and RenderDoc are all open source. Additionally, it has full support for Windows 7, so there is no downside in OS support either compared to DX12.
Tiago Sousa: From a different perspective, I think it will be interesting to see the result of a game entirely taking advantage by design of any of the new APIs - since no game has yet. I'm expecting to see a relatively big jump in the amount of geometry detail on-screen with things like dynamic shadows. One other aspect that is overlooked is that the lower CPU overhead will allow art teams to work more efficiently - I'm predicting a welcome productivity boost on that side.
Axel Gneiting: We are using all seven available cores on both consoles and in some frames almost the entire CPU time is used up. The CPU side rendering and command buffer generation code is very parallel. I suspect the Vulkan version of the game will run fine on a reasonably fast dual-core system. OpenGL takes up an entire core while Vulkan allows us to share it with other work.
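The parallel command-buffer generation Gneiting describes maps onto Vulkan's model of one command pool per recording thread. The sketch below is not real Vulkan code; it is a toy illustration in Python of the same idea, with `record_commands` standing in for per-thread `vkCmd*` recording (all names here are invented for illustration):

```python
# Toy sketch (not real Vulkan): each worker records its slice of draw
# calls into its own command list, mirroring how Vulkan allows parallel
# command-buffer recording from separate per-thread command pools.
from concurrent.futures import ThreadPoolExecutor

def record_commands(draws):
    # Stand-in for vkCmd* calls recording into a per-thread buffer.
    return [f"draw:{d}" for d in draws]

def build_frame(draws, workers=7):  # seven cores, as on the consoles
    chunk = max(1, len(draws) // workers)
    slices = [draws[i:i + chunk] for i in range(0, len(draws), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        buffers = list(pool.map(record_commands, slices))
    # Submission happens in a stable order, analogous to handing an
    # ordered list of command buffers to a single vkQueueSubmit.
    return [cmd for buf in buffers for cmd in buf]

print(build_frame(["sky", "wall", "monster"]))
```

The key property, in Vulkan as in the sketch, is that recording is embarrassingly parallel while submission stays ordered on one queue, which is why the work scales across cores where OpenGL serialized it.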
> Code written for Neo's GPU is not interchangeable with what was written for the PS4's GPU. Other slides indicate new instructions with Neo as well.
Could GCN3/4 be made "GCN2-compatible" with something as simple as microcode updates? I know it sounds like a hassle, but what if Neo just switches between two microcode updates depending on whether you're running an unpatched PS4 game or a Neo-patched game?
However, that might also mean that whatever Neo has, by virtue of being backwards-compatible, is not a complete match, since the confirmed ISA match between GCN3 and GCN4 leaves some conflicts with GCN2, unless the PS4's variation on GCN2 was different enough to leave room. The items Polaris brings at the physical level would benefit Neo, and might well be necessary to get to a FinFET GCN at all. Another fun coincidence is that Neo's apparent clock is one of the Polaris power states.
> Jean Geffroy said: "Our post-processing and tone-mapping, for instance, run in parallel with a significant part of the graphics work."
Looks like inter-frame async to me (it also explains the always-on Vsync with Vulkan, which is free since there is already additional inter-frame buffering for async). I wonder how much FPS async brings to the table at 4K, because a few percent of framerate gain might not be worth ~20 ms of added input latency (on a Fury X, if the average framerate is 50).
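The trade-off in that post is simple arithmetic: at a 50 fps average, one extra frame of buffering costs a full frame time, while "a few %" of framerate only shaves a fraction of a millisecond per frame. A quick check (the 50 fps figure is from the post; the 5% gain is assumed for illustration):

```python
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

base_fps = 50.0
gain_pct = 5.0  # assumed "a few %" async gain
boosted_fps = base_fps * (1 + gain_pct / 100)

extra_latency_ms = frame_time_ms(base_fps)  # one buffered frame at 50 fps
saved_per_frame_ms = frame_time_ms(base_fps) - frame_time_ms(boosted_fps)

print(extra_latency_ms)                  # 20.0 ms added by the extra buffer
print(round(saved_per_frame_ms, 2))      # ~0.95 ms saved per frame
```

So the buffering costs roughly twenty times more latency than the throughput gain recovers, which is why the poster questions whether it is worth it without reprojection.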
> Could GCN3/4 be made "GCN2-compatible" with something as simple as microcode updates? I know it sounds like a hassle, but what if Neo just switches between two microcode updates depending on whether you're running an unpatched PS4 game or a Neo-patched game?
I'd think it's possible. The ACE/HWS could easily emulate prior generations with microcode. The actual ALUs would be another matter, but so long as they weren't deprecating instructions or reducing/removing fixed-function hardware, I don't see a problem. I haven't seen any technical details, but Tonga/Fiji/Polaris seem to share the same scheduling hardware, with some new features backported. I'm curious whether the prefetching in Polaris was backported, since the ISA didn't actually change. In theory that was a scalar-processor feature that could be implemented without updating anything else. That could be the "improved" drivers they released earlier in the year.
> Looks like inter-frame async to me (it also explains the always-on Vsync with Vulkan, which is free since there is already additional inter-frame buffering for async). I wonder how much FPS async brings to the table at 4K, because a few percent of framerate gain might not be worth ~20 ms of added input latency (on a Fury X, if the average framerate is 50).
If I understood their technique correctly, that latency would be largely irrelevant, as they are reprojecting similarly to asynchronous timewarp in VR. The effect of hiding AFR latency with mGPU might be interesting as well. I'm not sure I've heard anyone discuss just how much latency ATW could bury. That might also play into the vsync differences we're seeing between AMD and Nvidia.
> This is actually not true (or it's a bug on Eurogamer's side), because I can perfectly well disable Vsync or set adaptive Vsync on my Fury X under Vulkan (with both the latest WHQL and beta drivers).
Have you checked whether turning off vsync has some actual effect? I'm asking because quite a few people are saying that turning off Vsync with Vulkan doesn't actually turn it off.
> Have you checked whether turning off vsync has some actual effect? I'm asking because quite a few people are saying that turning off Vsync with Vulkan doesn't actually turn it off.
When turning off Vsync I get screen tearing and unlocked FPS, so I guess it is indeed off.
BTW, could you please test a few locations with temporal AA / SMAA to check the async gains? It would be nice to know some numbers for async on (TAA) vs. async off (SMAA), especially at higher resolutions.
> Have you checked whether turning off vsync has some actual effect? I'm asking because quite a few people are saying that turning off Vsync with Vulkan doesn't actually turn it off.
Let's put it this way: if turning it off had no effect, you would be limited to whatever your monitor's refresh rate is, and from many benchmarks we know that's not the case.
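That argument can be stated mechanically: with vsync forced on, a benchmark's average fps can never exceed the display's refresh rate, so a result well above refresh proves vsync is off. A small sketch of that sanity check (the function name, sample numbers, and slack threshold are all arbitrary choices for illustration):

```python
def vsync_plausibly_on(avg_fps: float, refresh_hz: float,
                       slack: float = 0.5) -> bool:
    """If measured fps exceeds the refresh rate, vsync cannot be on.

    A small slack term absorbs timer jitter in the measurement.
    """
    return avg_fps <= refresh_hz + slack

print(vsync_plausibly_on(143.7, 60.0))  # False: far above 60 Hz, vsync is off
print(vsync_plausibly_on(59.9, 60.0))   # True: capped at refresh, may be on
```

Note the asymmetry: fps above refresh disproves vsync, but fps at or below refresh does not prove it, since the GPU may simply be that slow.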
> I'd think it's possible. The ACE/HWS could easily emulate prior generations with microcode. The actual ALUs would be another matter, but so long as they weren't deprecating instructions or reducing/removing fixed-function hardware, I don't see a problem.
There have been instructions dropped since GCN2, and there are cases where the freed encodings have been reused by the new GCN3/4 instructions.
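The problem being described is easy to picture: once GCN3/4 reuse encodings that GCN2 freed, the same bit pattern decodes to different instructions on each architecture, so one fixed decoder cannot serve both. A toy illustration of checking for such conflicts (the opcode values and mnemonics below are invented, not real GCN encodings):

```python
# Hypothetical opcode tables; the values and mnemonics are made up.
gcn2_decode = {0x11: "v_old_op_a", 0x12: "v_old_op_b", 0x13: "v_mul_f32"}
gcn4_decode = {0x11: "v_new_op_x", 0x13: "v_mul_f32", 0x14: "v_new_op_y"}

# An encoding conflicts if both ISAs define it but disagree on meaning.
conflicts = {op for op in gcn2_decode.keys() & gcn4_decode.keys()
             if gcn2_decode[op] != gcn4_decode[op]}
print(sorted(hex(c) for c in conflicts))  # such encodings need a dual-mode decoder
```

Any opcode in `conflicts` is exactly the case where a simple microcode switch is not enough and something like the dual-mode decode block mentioned below in the thread would be required.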
> If there isn't some kind of microcode engine that has gone unmentioned, either for the shared instruction-fetch portion of the CU group or at a CU level, some other solution would have to be found, like a dual-mode decode block or a modest update to the PS4's ISA that sneaks in additional functionality in the few places with encoding room.
It wouldn't have been the first time this was solved by a simple shader rewrite. I don't know the precise software architecture, but it's safe to assume there is at least some type of hardware abstraction / driver layer, right?