Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Nitpick: remember that the peak is "over 6GB/s", not exactly 6.
With 6.5 you end up with 4.05 versus 4.00, and 4.38 versus 4.33. And it's just napkin math to give some ballpark of what's happening with compression ratios.

Not worth the nitpicking considering the comparison is against a 22 GB/s peak.
 
I would think, if they were to use a V-shape / split heatsink design, that cooling two separate masses which each draw roughly 50% of the heat away from the die would be easier than cooling a single mass which draws 100% of the heat away.

I'd assume that concentrating more heat in a single area makes it disproportionately harder to cool, so splitting it up may be more efficient.

I could be totally wrong, however, and it may just equalise and prove fruitless... anyone more versed in thermodynamics care to interject?

Edit: assuming my prior assumption is correct (it probably isn't, lol..), I'd think it would be most effective to have each heatsink in its own sealed-off "chamber" to avoid any sort of crossover. Or perhaps it may just be a case of outweighing such an effect by maintaining enough airflow.
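For what it's worth, here is a first-order steady-state sketch, treating each heatsink as a simple thermal resistance and ignoring heat spreading, airflow interaction and so on; all the numbers are made-up assumptions, not figures for any real console:

```python
# First-order steady-state model: delta_T = power * thermal_resistance.
# All numbers are illustrative assumptions, not measurements of any real console.
power_w  = 200.0   # hypothetical SoC heat output
r_single = 0.15    # K/W for one big heatsink (assumed)
r_half   = 0.30    # K/W for each half-size heatsink (assumed roughly double the big one)

# Two half-size heatsinks sharing the load act like thermal resistances in parallel.
r_split = (r_half * r_half) / (r_half + r_half)

print(f"single sink: dT = {power_w * r_single:.0f} K above ambient")
print(f"split sinks: dT = {power_w * r_split:.0f} K above ambient")
# Both come out at 30 K with these assumptions: in this simple model splitting
# roughly equalises, and the temperature rise scales linearly (not exponentially)
# with power. The split only wins if each half gets noticeably better airflow.
```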
 
If the devkits are designed to be stacked, and work effectively with other kits to the sides, it might be that the only way to create a large enough intake area from the front is to use the "V" shape.

There. That's it: my final speculation! It's to do with frontal intake area for stacked devkits!

A split heatsink with heatpipes (Sony's traditional weapon of choice) leading to two large, finned areas behind the sides of the "V" would make sense ... at least to me. :-?

Final retail units should have more flexibility with side / front / top intakes than the devkits, and I don't expect the deep V to make it to final retail units.
 
I agree with all of it, but I think there's still a small chance of the V making it.

I should have posted this here instead of the PS5 thread. The best reason I can find for the V is to avoid the intake being ducted too long and thin between the fan and the inlet; the other is to avoid having the intake and exhaust right beside each other, since the hot air needs to leave as soon as possible. These two goals actually conflict with each other.

This is kind of overkill (it could deal with something like 300 W), but it could be a scaled-down equivalent. The V allows the airflow to enter the two fans at the ideal angle to reduce turbulence as much as possible: no long ducting, and no intake too close to the hot outlet. It still gets front-to-back airflow if something is stacked on top, and less noise if nothing is stacked or the unit is vertical.

[Attached image: ps5.png]
 
The architect for the Series X gave a >6 GB/s throughput figure for the decompression block, though the decision not to use that as the official number seems to indicate it's not common.

They probably went with what's guaranteed attainable rather than the maximum rates possible under ideal conditions. We'll probably get those ideal-case numbers too, closer to release.
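For context, the officially quoted Series X figures are 2.4 GB/s raw and 4.8 GB/s "typical" compressed, so the guaranteed versus peak numbers imply roughly these compression ratios (napkin math only, treating 6 GB/s as a floor for the "over 6" peak):

```python
# Napkin math on the Series X I/O figures discussed above.
raw     = 2.4   # GB/s from the SSD (official figure)
typical = 4.8   # GB/s, official "typical" compressed figure
peak    = 6.0   # GB/s, really "over 6", so treat this as a floor

print(f"guaranteed ratio: {typical / raw:.1f}x")   # 2.0x
print(f"peak ratio:      >{peak / raw:.1f}x")      # >2.5x
```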
 
I've been trying to picture an extreme case of in-game streaming need, and I'm at high risk of making wrong assumptions. Here's the scenario: I'm standing on top of a mountain in a nuclear-blast-proof bunker, looking out over a big valley with highly detailed nature, a lake, and snowy mountain tops with dense forest below. This scene is, say, 12 GB of data (leaving a bit for the OS) in front of your eyes. A hydrogen bomb falls from the sky and explodes in the middle of the valley. Within a second the lake vaporises, all the trees in the surrounding forest are shattered outward from the centre and burst into flames, and finally the snow blows off the mountain tops. So in a very short time the whole beautiful valley is transformed into a burning, ash-filled hell. Is this even possible? Not on an HDD, but maybe with a fast-I/O SSD? Or do you need to use a smaller amount of RAM for the scene so the whole transformation can be done in RAM?

Put another way, maybe a 12 GB scene in front of you and a 12 GB scene behind you, with a 180-degree turn, would be even worse. Not possible? I may be way off because I have no clue how to construct a scenario with high streaming needs. (I can accept that if it's the answer, but I'd appreciate understanding a bit more about the limitations and possibilities :D)
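As a rough sanity check on the 12 GB question, here's some napkin math using the publicly quoted PS5 SSD figures (5.5 GB/s raw, roughly 8-9 GB/s typical compressed, 22 GB/s best case); the 12 GB scene size is just the number from the scenario above:

```python
# How long it takes to stream a hypothetical 12 GB scene at the quoted PS5 rates.
scene_gb = 12.0   # the "whole valley" working set from the scenario (hypothetical)
rates_gb_s = {
    "raw (5.5 GB/s)":                 5.5,
    "typical compressed (~9 GB/s)":   9.0,
    "best-case compressed (22 GB/s)": 22.0,
}

for label, rate in rates_gb_s.items():
    print(f"{label}: {scene_gb / rate:.2f} s to stream {scene_gb:.0f} GB")
# raw ~2.2 s, typical ~1.3 s, best case ~0.5 s: a full 12 GB swap inside one
# second only works in the most optimistic case.
```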
 
You can do that without storage if you have fast enough processing to procedurally modify the scene: process it all with physics and use real-time procedural texturing and gas/flame simulation. There's never any one single solution, and such questions can't be answered. ;)

Looking at the specifics of your "12 GB behind, 12 GB in front", you couldn't load 12 GB in one second of turn time. However, you'd never need 12 GB of data, and if you're using that much, you're doing it wrong. The total amount of texture detail you need on screen is a few megabytes if textured efficiently, and LOD will reduce the geometric complexity and texture fidelity. A 12 GB scene using 12 GB of VRAM on a PC GPU will be a game caching loads of models and textures which aren't being drawn. You'll probably have, I dunno, random guess for illustration, 4 GB of active assets, a few GB of render buffers, and some working datasets.

I expect efficient next-gen games to be effectively freed from storage limitations and able to stream whatever assets they need pretty much on demand. Efficient rendering means you only need as much data as it takes to populate the pixels on screen, which really isn't that much. To date, we've had to work with huge inefficiencies: storing entire models and textures to render a small portion of the screen, and preloading content for when it does become visible. The more we move away from that, the less RAM and storage speed we need.
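To put a rough number on the "few megabytes of texture detail on screen" point: at 4K you have about 8.3 million pixels, so even with several bytes of unique texel data per pixel you land in the tens of megabytes, not gigabytes. A minimal sketch (the bytes-per-pixel figure is an illustrative assumption):

```python
# Napkin math for how much unique texel data is actually visible on screen,
# assuming perfectly efficient streaming / virtual texturing.
width, height   = 3840, 2160   # 4K output
bytes_per_pixel = 4            # illustrative: compressed albedo plus material data

pixels = width * height
print(f"{pixels * bytes_per_pixel / 2**20:.1f} MiB of unique texel data on screen")
# ~31.6 MiB, before any margin for overdraw, mip transitions or prefetching.
```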
 
I think it will be possible. During the white-out of the explosion (or whatever you want to call it), it could load the next scene of destruction into RAM.

Normally it would just be a pre-recorded cutscene.
 

You don't need to load anything. After the initial white-out, fade to black as your retinas melt, show a slowly decreasing health bar, and then the game-over screen.

Shifty Geezer's post may have been more constructive... :D
 
I prefer to believe RedGamingTech or MoreLawisDead, who say the contrary and supposedly also have dev friends, rather than those cheered on by Tim Dog, Tom Warren and Brad Sams...
THIS IS NOT A BASELESS RUMOUR THREAD.

That is, credentials aren't of any interest here. This is not a thread for discussing who is and isn't a reliable source. No dev/source should be quoted in this thread unless they are bringing specific technical data to the discussion, and the relevance of their data should be talked about beyond "console X is better/worse than console Y".


Failure to limit oneself to technical discussion on the hardware in the next gen machines and how they work will see posts removed and possible thread bans.
 
So, where do we have the baseless rumour thread where these devs favouring one platform over another belong?

Edit: it isn't my intention to share that link where the GG dev thinks the XSX is more powerful in another thread, as it is a sensitive subject.
 
I don't know that we want such a thread. I don't see the value in a thread where people argue over which sources are reliable and which aren't.

What is it in that comment that is worth discussing? If you want to know about potential ray-tracing performance differences, say, start a thread asking that question, link to the post, mention that it's a hypothetical discussion on the possibilities of the remark and engage in a discussion on what MS may be doing differently. Something like that. Discuss ideas and not the validity of sources.

An insider trumpeting/deriding a platform doesn't in itself generate positive discussion.
 
By all accounts it looks like the PS5 case design will be very interesting indeed.
A retail version of the V case (toned down to look prettier) is what I'm expecting.
How else are they going to get the chip running at that speed?
 
You guys are the experts, so I thought I'd ask; excuse the tech illiteracy.

I've heard people talking about CPU IPC gains from current gen to next gen (or Jaguar to Zen architecture) in very vague terms, like 2x and 3x and so on. What exactly are these kinds of multipliers based on, regarding the jump we can expect in CPU power and actual performance gains in games?

I know the basics: Jaguar, for example, doesn't have SMT in the current machines, but Zen 2 in the next-gen machines does, with 16-thread utilisation. I've heard things like a 10 to 15% performance improvement from using SMT as well.

I don't know if I'm asking this right, but how do IPC gains and things like multithreading combine into a multiplier like the ones I've heard thrown around? What does that actually mean in practice?
 
Ooof, this gets complex and way above what I know.
I think most people just look at IPC as instructions per clock. So a simple way to estimate how much work you're doing is instructions per clock * clock speed, and to assume that greater IPC means more throughput.
But it may not; it might just reflect simpler instructions, for instance. A well-designed, complex instruction can get more effective work done at a lower IPC. Perhaps AVX-512 is a good example of how different instruction sets can lead to different outcomes: AVX-512 does a huge amount of work per instruction, but will dramatically decrease your clock speed to do it.

I suspect, though, that Zen 2 has come far enough along that the increase in IPC also translates into an increase in effective workload throughput, and I think people figure this metric out through benchmarking.
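A minimal sketch of that naive multiplier arithmetic (the relative-IPC figure and clocks here are illustrative assumptions, not measured numbers):

```python
# Naive "how many times faster" napkin math: perf ~ IPC x clock x cores.
# The relative-IPC uplift is an illustrative assumption, not a benchmark result.
jaguar = {"ipc_rel": 1.0, "clock_ghz": 1.6, "cores": 8}   # current-gen, base PS4-ish clocks
zen2   = {"ipc_rel": 2.0, "clock_ghz": 3.5, "cores": 8}   # next-gen, SMT ignored here

def naive_perf(cpu):
    return cpu["ipc_rel"] * cpu["clock_ghz"] * cpu["cores"]

print(f"naive multiplier: {naive_perf(zen2) / naive_perf(jaguar):.1f}x")
# ~4.4x, before SMT and before any real-workload benchmarking.
```

That's where the "2x, 3x and so on" talk comes from: pick an assumed IPC uplift, multiply by the clock ratio, and you get a headline number that real games may or may not reflect.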

As for multi-threading: normally when a CPU thread hits some sort of stall, it has to wait for that stall to resolve before it can finish its work. Hyper-threading allows the core to switch to another thread and do work there, then return to the original thread when the stall is done. So the improvement really comes down to how long and how frequent those stalls are; you're just getting more efficiency out of your cores by removing as much downtime from thread stalling as possible. The gains could be massive for, say, the standard CPU/OS side of things, where multiple processes are being juggled and competing for resources, or very minimal, since a well-optimised program should have very few stalls. Once again, you'd have to benchmark something to really see the difference hyper-threading makes in a game situation.
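A toy model of that stall-hiding idea, purely as an illustration under the assumption that a second hardware thread can fill the first thread's idle slots (real cores share execution ports and caches, so actual gains are messier):

```python
# Toy model of SMT stall-hiding: illustrative only, since real cores share
# execution resources and gains are more complicated than this.
def smt_speedup(busy_fraction):
    """One thread keeps the core busy `busy_fraction` of the time; a second
    hardware thread fills the idle slots, up to a fully busy core."""
    single = busy_fraction
    with_smt = min(1.0, 2 * busy_fraction)
    return with_smt / single

for u in (0.5, 0.7, 0.9):
    print(f"core busy {u:.0%} of the time -> ~{smt_speedup(u):.2f}x with SMT")
# 50% -> 2.00x, 70% -> 1.43x, 90% -> 1.11x: the better optimised the code,
# the less SMT adds.
```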

The honest TL;DR: everything is just marketing until we see some solid benchmarking, and until we see how that benchmarking changes as code is optimised. So it may start somewhere but end somewhere else at a later time.

So all the marketed features need benchmarking, the SSDs need benchmarking, etc.
 
So, where do we have the baseless rumour thread where these devs favouring one platform over another belong?

Edit: it isn't my intention to share that link where the GG dev thinks the XSX is more powerful in another thread, as it is a sensitive subject.

We all know the XSX is more powerful if you look at the paper specs, at least regarding TF. ;)
 