Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Status
Not open for further replies.
Wouldn't PS5 be using OpenGL or other in-house related SDK APIs & toolchains, especially their first-party teams?

They would, but it's not really a relevant distinction. It's just pointing out that the software API for RT (DXR) currently being used to make games can do RT using non-dedicated hardware. That's the point.
 
I doubt it has dedicated RT hardware. It’s probably just RT via compute, which is how everything does RT in DXR except for Turing.
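To make the "RT via compute" point concrete: ray intersection is ordinary arithmetic that any shader core can execute, which is why DXR can run on GPUs without dedicated units. A minimal sketch of the per-ray kernel a compute-based tracer would run (ray-sphere intersection, nothing vendor-specific assumed):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance to the nearest hit, or None on a miss.
    This is the kind of plain arithmetic a compute-shader ray
    tracer runs per ray: just ALU work, no dedicated RT hardware.
    Assumes `direction` is normalized."""
    # Vector from sphere center to ray origin
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - c  # discriminant of the quadratic
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

# A ray fired down -z hits a unit sphere centred 5 units away at t = 4
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))
```

What Turing's RT cores add is fixed-function BVH traversal and triangle intersection for exactly this kind of math, which is a throughput question, not a capability one.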

Vega VII is 64 CUs with a 4096 bit HBM2 interface and hardware for INT8 and FP64. All in 330mm^2. Zen 2 chiplet is 70mm^2. 56 CUs to yield 52 may be fairly close.

Right, so 52 CUs even at a fairly optimistic 1.5GHz clock speed is going to give 10TFlops. A lot of the rumours and wishful thinking I have seen on other forums expect it to be around 14TFlops. I just don't see how that is happening.
 
Wow, so no secret sauce and standard x86 architecture.. I'm shocked I tell you.

So we still need to know, HT yes or no, how much RAM (and what type), how many TF and the price.

Edit: I'm more excited about audio enhancements and a new upgraded PSVR.
 
but it's not really a relevant distinction.

Does OpenGL support RT, or RT in the manner which DXR is capable of? If not, then a distinction can be made. Sony's way of doing RT might be more hardware-centric, rather than an OpenGL-driven software solution (which I haven't seen as of yet, other than some hack projects running terribly).
 
Does OpenGL support RT, or RT in the manner which DXR is capable of? If not, then a distinction can be made. Is Sony's way of doing RT more hardware-centric, or more software-driven via an OpenGL RT solution (which I haven't seen as of yet, other than some hack projects running terribly)?

The API used is irrelevant to whether a solution is more hardware or software.
 
Right, so 52 CUs even at a fairly optimistic 1.5GHz clock speed is going to give 10TFlops. A lot of the rumours and wishful thinking I have seen on other forums expect it to be around 14TFlops. I just don't see how that is happening.
The latest Gonzalo product code suggests a 1.8GHz GPU clock.

Wouldn't PS5 be using OpenGL or other in-house related SDK APIs & toolchains, especially their first-party teams?

Yes, I didn’t mean to imply PS5 DXR usage - merely that it’s simply a compute operation enabled by some API.

The API used is irrelevant to whether a solution is more hardware or software.

Hardware-accelerated would be more apt, no? Any attempt to do RT is going to do so leveraging features of the underlying hardware.
 
The API used is irrelevant to whether a solution is more hardware or software.

How? Please explain. I don't claim to know everything, but I'm willing to understand. How are different API standards not relevant? I always thought OpenGL and DirectX had very distinctive approaches to rendering graphics. Meaning, if DirectX (DXR) is capable of rendering ray tracing across the hardware in a manner where specialized RT logic/cores aren't necessary, does that apply to the current OpenGL as well?
 
Glad we don't have to deal with hard drives any more (and the people claiming it will never happen).

I'm expecting a hybrid system, perhaps something like Apple's 'Fusion' drive approach. If they're doing something ridiculously fast as claimed ("Cerny claims that it has a raw bandwidth higher than any SSD available for PCs") then it's going to be small in size. Or expect this box to be a lot more expensive.
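To put the "ridiculously fast" claim in perspective, even a small fast tier changes load times dramatically. The bandwidth figures below are purely illustrative assumptions, not quoted specs:

```python
def load_seconds(level_gb, bandwidth_mb_s):
    # Naive streaming time: data size / raw bandwidth
    # (ignores decompression, seek overhead, and CPU cost)
    return level_gb * 1024 / bandwidth_mb_s

# Illustrative numbers only: a 5 GB streaming chunk on a ~100 MB/s
# console HDD vs. a hypothetical multi-GB/s solid-state tier
print(f"HDD: {load_seconds(5, 100):.1f} s")   # ~51 s
print(f"SSD: {load_seconds(5, 4000):.1f} s")  # ~1.3 s
```

Which is why even a modest-capacity fast tier in a Fusion-style hybrid would be worth the cost if the hot game data fits on it.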

Of course it is possible that Sony are using some genuinely new tech with the promise of speed, reliability and size. Actually, you can probably scratch reliability, or rather the number of write cycles. There are lots of promising pre-production solid-state storage technologies whose only drawback is that they're limited to < 1,000 write cycles, which probably wouldn't be an issue in a console.
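A back-of-envelope endurance calculation shows why < 1,000 cycles is survivable in a console. The daily write figure is an assumption for illustration:

```python
def years_of_endurance(capacity_gb, write_cycles, gb_written_per_day):
    """Rough endurance estimate: total bytes the cells can absorb
    (capacity * rated cycles) divided by the daily write load.
    Ignores write amplification and wear-levelling overhead."""
    total_writes_gb = capacity_gb * write_cycles
    return total_writes_gb / gb_written_per_day / 365

# A 1 TB drive rated for only 1,000 cycles, with a console writing
# an assumed ~50 GB/day (installs, patches, saves):
print(round(years_of_endurance(1000, 1000, 50)), "years")
```

Console workloads are read-dominated (installs happen once, then the data is mostly read back), so even a low cycle rating lasts far beyond the machine's lifespan.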
 
How? Please explain. I don't claim to know everything, but I'm willing to understand. How are different API standards not relevant? I always thought OpenGL and DirectX had very distinctive approaches to rendering graphics. Meaning, if DirectX (DXR) is capable of rendering ray tracing across the hardware in a manner where specialized RT logic/cores aren't necessary, does that apply to the current OpenGL as well?
Yes, same for Vulkan etc. APIs are irrelevant to the way a feature is implemented in the GPU. AMD/Intel/NV choose the way in which they accelerate said feature.
 
How? Please explain. I don't claim to know everything, but I'm willing to understand. How are different API standards not relevant? I always thought OpenGL and DirectX had very distinctive approaches to rendering graphics. Meaning, if DirectX (DXR) is capable of rendering ray tracing across the hardware in a manner where specialized RT logic/cores aren't necessary, does that apply to the current OpenGL as well?
The API is there to generalize functions in a way developers can rely on. APIs exist to make developers' lives easier by providing a set of functions they can bank on working as long as the API is supported.

So a specific function always takes the same types of inputs and always produces the behaviour listed in its description. The drivers, supplied by the IHVs or, in this case, Sony, are responsible for determining how the function is carried out on the hardware, exposing whichever hardware features they wish to or not.
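A toy sketch of that separation: the API fixes what a call means, while each driver decides how it is carried out. Every name here is invented for illustration; nothing below corresponds to a real DXR, Vulkan, or PS5 interface:

```python
# Toy illustration (all names invented): the API contract fixes what
# trace_rays() means; each vendor's driver decides how it is executed.

class ComputeFallbackDriver:
    def trace_rays(self, rays):
        # No dedicated hardware: run intersection math on shader cores
        return f"traced {len(rays)} rays on general compute units"

class RTCoreDriver:
    def trace_rays(self, rays):
        # Dedicated hardware: hand BVH traversal to fixed-function units
        return f"traced {len(rays)} rays on dedicated RT units"

def render(driver, rays):
    # Application code targets the API contract, not the hardware:
    # the same call works unchanged on either implementation.
    return driver.trace_rays(rays)

rays = [object()] * 3
print(render(ComputeFallbackDriver(), rays))
print(render(RTCoreDriver(), rays))
```

Which is the whole point of the argument above: from the application's side the call looks identical, so the API alone tells you nothing about how hardware- or software-driven the solution underneath is.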
 

On the GPU side of the equation, a custom variant of AMD's upcoming Navi architecture is also confirmed, but this is where details are very thin on the ground. The understanding we have is that on the one hand, Navi is a new iteration of the existing AMD Graphics Core Next (GCN) architecture, which suggests a structural limit of 64 compute units or 4096 shaders. But on the other, certain leaks have suggested that Navi is geared more towards pixel-pushing as opposed to its immediate predecessor, the more compute-orientated Vega. I wouldn't underestimate the 'custom' side of the equation either: Sony has spent years on this project and with PS4 Pro, the firm has shown how it's prepared to innovate in areas that PC gaming is only now starting to get to grips with. Secret sauce? Quite possibly.

Very fun times ahead... :yep2:
 
The SSD isn't a surprise and I'll say I called it. "I know it’s impossible, but can we have an SSD?" said every dev ever, because it changes the fundamental way you can access data. Interestingly though, it sounds like Sony have gone very specialist, which I think is smart. I expect it to be smaller if it's that much faster, as they claim. I think a faster SSD and more medium-speed RAM is probably the better balance for game worlds (not visuals) than slower storage and faster RAM.

As for hardware RT support, we need to wait and see how exactly and what sort of performance it's getting. Very nice to hear that it's being included for flexibility though, not just graphics. And it should tie in with the 3D audio too, which is very exciting. My surround headphones definitely give me an edge in Apex Legends.
The RT is fake, an insider told me Sony is using a cell powered GPU to do Ken tracing, this thing does 10million Kutaragis/second... Let that sink in....
 
I doubt it has dedicated RT hardware. It’s probably just RT via compute, which is how everything does RT in DXR except for Turing.
Though with limited performance making it a very costly, and thus to be avoided, solution.

I have to correct myself though: the article only says the GPU supports ray tracing, and nothing about acceleration, so yes, it could just be RT on compute the same as any DX12-class GPU, which would make it a feature tick-box rather than something usable and used.
 
Though with limited performance making it a very costly, and thus to be avoided, solution.

I have to correct myself though: the article only says the GPU supports ray tracing, and nothing about acceleration, so yes, it could just be RT on compute the same as any DX12-class GPU, which would make it a feature tick-box rather than something usable and used.
Agree. They probably felt like they would be criticized if they didn’t support it, or that they expect MS to say they support it and don’t want to be left out.
 
I doubt it has dedicated RT hardware.
Don't. At the very least there will be special instructions to accelerate/accommodate ray tracing. The fact that the system architect was keen on mentioning the feature in the context of a custom Navi GPU and a custom ray-traced audio unit gives a strong indication of some sort of hardware solution.

I think this thread has had enough of console RT doubters at this point.
 
The audio revelation is interesting. MS went this route with the current generation, and many of us said it was a waste of resources, because unless you have an expensive speaker setup or resign yourself to using headphones, you won't notice the difference.

I think that's true here too. Perhaps it makes sense if you think VR will take off, but I'm somewhat skeptical that advanced audio is something much of the public is in a position to take advantage of.
 