Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

Sounds complicated. Will that be another headache for developers?
Yeah, but not as much if it's mostly the OS using the lower-speed region.
Don't suppose they'll be able to optimize the OS/dashboard use down to just 2GB of RAM? Maybe too many buffers are still in use for video and image sharing or streaming. Or would they use any freed-up memory space, and maybe more, for reading from the host-controlled SSDs? Or does it read directly from SSD to program address space?
Also, any app can be swapped to the SSD while gaming; I'm not sure what needs to keep running that requires much memory. Looking at what an Amiga could do with 256KB, I would have thought 1000x more would be enough for everybody, but no, even 10000x doesn't seem to be enough for an OS these days. Darn kids, enough is never enough.

Maybe there's a point during development when some exec will say, "All right, let's go with 20GB."
 
Cost is the only reason I can think of. It would still be unified because it's a contiguous physical address space (and virtual, for that matter). You just have some memory accesses that won't get the benefit of parallel access across all modules, lowering your theoretical peak data throughput.

Doesn't that happen anyway? Or has something changed? Isn't an AMD APU's unified memory pool still segregated into CPU-cacheable memory and local memory (VRAM)? Local memory has maximized throughput while cacheable memory does not, and thus throughput is significantly slower for both GPU and CPU accesses.

In fact, the old Fusion APUs had three different memory segments:

cacheable
uncacheable
local

all with different throughputs due to the granularity of interleaving.
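To make the interleaving point concrete, here's a minimal sketch of how peak throughput scales with the number of memory modules an address range is striped across. The GDDR6 figures (14 Gbps per pin, 32-bit chips) are illustrative assumptions, not confirmed specs for either console.

```python
# Sketch: peak bandwidth depends on how many memory channels an
# address range interleaves across. Per-pin rate and chip width are
# illustrative GDDR6 numbers (assumptions, not confirmed specs).

GBPS_PER_PIN = 14      # assumed GDDR6 data rate, gigabits/s per pin
PINS_PER_CHIP = 32     # one 32-bit GDDR6 chip

def peak_bandwidth_gbs(chips_interleaved: int) -> float:
    """Theoretical peak in GB/s for a range striped across N chips."""
    return chips_interleaved * PINS_PER_CHIP * GBPS_PER_PIN / 8

print(peak_bandwidth_gbs(10))  # striped across all 10 chips -> 560.0
print(peak_bandwidth_gbs(6))   # confined to 6 chips -> 336.0
```

A range that only spans six of the ten chips simply has fewer pins working in parallel, hence the lower theoretical peak.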
 
I think so. Must be worth the savings though *shrug*

I think "headache" is a bit much. The OS reserve is already grabbing 3 of the 6GB of lower-spec RAM. If the system defaults to directing all GPU operations' writes to the 10GB of "fast" RAM (unless otherwise specified) and all CPU operations' writes to the "slow" 3GB of RAM (save GPU setup operations, and unless otherwise specified), maybe developers would only have to manually direct a portion of data operations.
 
Question... I noticed on another forum someone mentioning that the PS4 Pro GPU, with its butterfly-like design, used GCN1 on one wing to match the architecture of the PS4 for backwards compatibility, the other wing being a custom Polaris with Vega features (GCN2). Can anyone tell me if this is actually true? I do not recall that being the case. The possibility that Sony would do something similar with the PS5 GPU was also discussed: a similar butterfly design using RDNA1 on one wing to help support backwards compatibility and RDNA 2 features on the second wing. Curious if this train of thought has any merit at all or if it's nonsense, thanks.
 
Question... I noticed on another forum someone mentioning that the PS4 Pro GPU, with its butterfly-like design, used GCN1 on one wing for backwards compatibility with the PS4, and a custom Polaris with Vega features on the other...
No, it's not. Unless someone shows AMD having units capable of 4:1 FP16, that could never work.
 
Is it technically possible the XSX rises to 13TF without much affecting the noise level and BOM?
Technically, anything is possible, but that may blow it out of the price range. The simplest answer is to just keep the better-yielding chips that can hold a higher clock and not break under worse conditions.

But the system is designed with price in mind, and thus it is designed around the worst silicon they are willing to allow in the product.
 
Is it technically possible the XSX rises to 13TF without much affecting the noise level and BOM?
Impossible without affecting the BOM; by how much is anyone's guess. Just like the rumored 2GHz PS5: without seeing their parametric yield distribution, we can't know whether there's a way to discard a few more chips that don't pass in order to raise the clock, or how much overvolting would keep it at a similar yield (which would probably cost more with esoteric thermal management).
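The yield-distribution argument can be illustrated with a toy Monte Carlo: model each die's maximum stable clock as normally distributed and see what fraction of dies survive at each clock cutoff. The mean and spread here are pure illustration, not real yield data for any chip.

```python
# Sketch: why the shippable clock depends on the parametric yield
# distribution. Per-die max stable clock is modeled as Gaussian
# (illustrative numbers only), then we count survivors per cutoff.
import random

random.seed(42)
MEAN_FMAX_MHZ = 1900   # assumed mean of per-die max stable clock
SIGMA_MHZ = 120        # assumed spread across dies
dies = [random.gauss(MEAN_FMAX_MHZ, SIGMA_MHZ) for _ in range(100_000)]

for cutoff in (1700, 1825, 2000):
    passing = sum(fmax >= cutoff for fmax in dies) / len(dies)
    print(f"cutoff {cutoff} MHz -> {passing:.1%} of dies usable")
```

Raising the cutoff always throws away more dies; whether the extra clock is worth the lost yield is exactly the trade-off only the real distribution can answer.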

They announced 12TF already, and they would have decided on that with everything considered. Changing it afterwards would be a weird move.
 
Question... I noticed on another forum someone mentioning that the PS4 Pro GPU, with its butterfly-like design, used GCN1 on one wing for backwards compatibility with the PS4, and a custom Polaris with Vega features on the other...


What? :oops::runaway:
 
Question... I noticed on another forum someone mentioning that the PS4 Pro GPU, with its butterfly-like design, used GCN1 on one wing for backwards compatibility with the PS4, and a custom Polaris with Vega features on the other...

The thought crossed my mind when Fritz/DF showed a die shot of the APU in the Pro, which clearly shows that the two wings are asymmetrical in their physical layout.

[Attached image: APUComp.jpg]
It was either an area optimisation, or they are indeed different CUs.
 
Cool patent from SIE here. It looks like a dynamic amount of processing power could be assigned to a streaming user. À la Stadia and Stadia Pro?

http://www.freepatentsonline.com/y2020/0084267.html

In one embodiment, a high-speed, low-latency system in which multiple servers, optionally composed of game consoles, are interconnected is described. This technology is the building block for an elastic compute architecture which, for example, is utilized for gaming purposes, including cloud gaming. The embodiment allows clustering of compute nodes, e.g., game consoles, virtual machines, servers, etc., to obtain a larger amount of compute capacity than otherwise available. The way the compute nodes are tied together allows for new ways of writing game engines, where, for example, a game, either single-player or multiplayer, runs across many compute nodes. The system provides a clustering technology that allows for different types of games.
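A minimal sketch of the "elastic compute" idea the patent describes: a cluster grants a variable number of compute nodes per streaming session based on demand. The `Cluster`/session names and the grant policy here are invented for illustration, not taken from the patent.

```python
# Hypothetical sketch of elastic compute allocation: each streaming
# session is granted a variable number of nodes from a shared pool.
# Class and method names are made up, not from the SIE patent.

class Cluster:
    def __init__(self, total_nodes: int):
        self.free_nodes = total_nodes
        self.sessions: dict[str, int] = {}

    def start_session(self, user: str, nodes_wanted: int) -> int:
        """Grant up to nodes_wanted nodes (a premium tier might ask for more)."""
        granted = min(nodes_wanted, self.free_nodes)
        self.free_nodes -= granted
        self.sessions[user] = granted
        return granted

    def end_session(self, user: str) -> None:
        """Return this session's nodes to the shared pool."""
        self.free_nodes += self.sessions.pop(user, 0)

cluster = Cluster(total_nodes=8)
print(cluster.start_session("standard_user", 1))  # -> 1
print(cluster.start_session("premium_user", 4))   # -> 4
cluster.end_session("premium_user")
print(cluster.free_nodes)                         # -> 7
```

The Stadia/Stadia Pro comparison maps onto `nodes_wanted`: a higher tier simply requests more nodes from the same pool.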

This one is even cooler, though: an attachable tactile feedback unit that can actually deform a flexible layer to provide feedback (i.e. real-time Braille). The unit could also be an actual screen.

http://www.freepatentsonline.com/y2020/0078674.html
 
Sadly, physical hardware is going to be as relevant as physical controls at some point. My kid games on a tablet with virtual controls. I just can't....

I'm 100% sure that these generations that started using phones/tablets from age one and up will have major finger/joint problems as teenagers or adults.

It's just crazy to see kids that can barely walk using phones, and kids playing touch-screen games for hours with their fingers at weird angles.

IMO, kids (or adults) should not be allowed to use touch devices for gaming more than 30-60 minutes a day, or not even that for the youngest ones. I know it's impossible to stop, as kids are just as hive-minded as adults: if it's popular to play on a phone, then they play on a phone.

But it is what it is
 
Is it technically possible the XSX rises to 13TF without much affecting the noise level and BOM?

I mean, possibly. The final speed will be decided late, based upon yields and noise/heat. Every chip has to meet the same baseline performance so games all work the same. The lower you set that baseline, the more good chips you get because they're fast enough, and the lower your costs. But obviously, if you want enough performance, there has to be some sort of cutoff.

Then, after that part is decided, you start putting some units together and seeing what real-world clock speeds you can achieve versus how hot the console gets and how much noise the system makes. You set that to whatever balance you want, and that's when you end up with final clocks.

A perfectly relevant example: after the PS4 and Xbox One were announced and shown off, and MS realized it was very much the loser, they retested their clocking and found they could safely clock their CPU somewhat higher (from 1.6GHz to 1.75GHz, roughly a 9% increase). Thus the Xbox One (base) has a slightly faster CPU than the PS4.
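For reference, the commonly cited figures for that bump are a 1.6GHz base raised to 1.75GHz at launch; the percentage works out as follows:

```python
# Quick check of the clock-bump arithmetic for the Xbox One CPU,
# using the commonly cited 1.6GHz -> 1.75GHz figures.
base_mhz, final_mhz = 1600, 1750
bump_mhz = final_mhz - base_mhz
print(bump_mhz)                       # 150 MHz increase
print(f"{bump_mhz / base_mhz:.2%}")   # 9.38%
```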
 