Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Probably not. Not even the UE5 tech demo was like that, and even that level is hardly going to be common in full production games, if it shows up at all. Maybe if there's another 'Crysis': a PC-only game needing 3090-level hardware or something.

So we have to wait for 40 TFLOP machines then...
 
So we have to wait for 40 TFLOP machines then...

No idea; crafting games like that takes more than just advanced hardware, I think :p Judging by the texture quality, 24 GB at close to 1 TB/s probably wouldn't be a luxury.
Those are most likely tech demos for 2080 Ti/Turing Titan-class products, in non-game environments. Probably produced like a movie/CGI.

In pure technical achievement, Star Citizen is probably the closest. We'll be happy with Cyberpunk 2077, Forbidden West, the next Spider-Man, Pragmata, etc.
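
A quick back-of-envelope on that memory figure. The texture sizes and material counts below are my own assumptions, purely to show how fast film-quality texturing eats VRAM:

```python
# Back-of-envelope: memory cost of film-quality texturing.
# All figures below are assumptions for illustration, not demo specs.

BYTES_PER_TEXEL_BC7 = 1          # BC7 compresses to 8 bpp = 1 byte per texel
MIP_CHAIN_OVERHEAD  = 4 / 3      # a full mip chain adds ~33%

def texture_mb(resolution):
    """VRAM for one square texture with a full mip chain, in MB."""
    return resolution ** 2 * BYTES_PER_TEXEL_BC7 * MIP_CHAIN_OVERHEAD / 2 ** 20

per_8k = texture_mb(8192)
print(f"one 8K texture: {per_8k:.0f} MB")      # ~85 MB each

# Assume ~100 unique 8K materials resident, each with albedo,
# normal and roughness maps (3 layers):
resident_gb = 100 * 3 * per_8k / 1024
print(f"resident texture set: {resident_gb:.1f} GB")  # ~25 GB before geometry
```

Under those assumptions you land right around that 24 GB figure before counting geometry, render targets or anything else.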
 
So I was thinking about the cooling patent and the fact that we could see 2 stacked chips (5c and 5d) inside the main chip (this is a modified version of it):
The cooling patent and the stacked chip idea run into similar challenges, if we assume the PS5's SOC is drawing the kind of power we think a console of this class should draw.
The diagram has a chip on top of another chip, and additionally appears to encase both in plastic, which isn't encouraging in terms of thermal transfer.

Looking at AMD's other large SOCs, what is below the SOC is devoted to delivering signals and current in and out of the chip, and everything above the chip is dedicated to drawing heat out. A chip on top of that chip inside of a plastic casing defeats the purpose of the upper side, and drilling out a large fraction of the underside cuts into the current and signals going in, weakening the delivery side that supposedly necessitates such a solution.

There are 3D integration methods that could produce something that could be stacked, although it would be something like an active interposer with on-chip memory under the main SOC in order to give the logic layer better thermal access to the cooler. However, there are very few examples of this being produced for an SOC, and not for an SOC in this performance/power class. Cost and limited manufacturing capacity for a complex integration method for such a large chip could make this difficult to reconcile for a mass-produced console.

The double-sided cooling patent seems to be concerned with a chip with limits on how it can be placed, or with space taken up by other components. Given what we've seen of the dimensions of the PS5, I don't see space being high on the list of constraints for the main box. Power-wise, I'd be curious whether there's a rough ceiling on how much power such an arrangement could be fed, given that a lot of the footprint needed to bring in the power would be blocked by the heatsink.
It seems to make more sense for something with more modest power needs and some kind of space constraint. Perhaps a peripheral or lower-power component in a cramped corner of the box might be more compatible.
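
A minimal sketch of why the stacked arrangement worries me, using a simple 1-D series thermal-resistance model. All resistance and power values here are illustrative assumptions, not measurements of any real package:

```python
# Junction temperature from a simple series thermal model:
#   T_junction = T_ambient + P * (R_die + R_interface + R_heatsink)
# All numbers are illustrative assumptions, not measured values.

def t_junction(power_w, r_path_c_per_w, t_ambient=35.0):
    return t_ambient + power_w * r_path_c_per_w

# Conventional package: die -> TIM -> heatsink.
r_conventional = 0.05 + 0.05 + 0.15            # degC/W, assumed
print(t_junction(180, r_conventional))         # ~80 C at 180 W: manageable

# Stacked die under a plastic mold compound: the lower die's heat
# must pass through the upper die and the encapsulant first.
r_stacked = 0.05 + 0.30 + 0.05 + 0.15          # extra ~0.3 degC/W assumed
print(t_junction(180, r_stacked))              # ~134 C: past typical silicon limits
```

Even a modest extra resistance in the path dominates at console-class power draw, which is why this seems better suited to a low-power part.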


What's interesting is that Cerny already designed a very innovative chip-on-chip design for Vita with the memory stacked on top of the APU.
Although the Vita also provides an example of the level of power dissipation that is permitted by this kind of stacking.

So as they already have experience doing this, I wouldn't be surprised if they do something similar on PS5, even if it's not done on the main APU. Now, what part of the PS5 could use the patent design? Well, the I/O complex comprises several chips (which could be stacked to save space) and it'll probably need active cooling, so it could be a good candidate for that design. The flash controller could also be a good candidate (though the I/O complex seems to be part of the main APU?).
From the diagram, everything that isn't system memory or the flash controller+flash is inside of a box called "main custom chip", so my interpretation is that there's no separate set of chips for the I/O complex.

We know the system memory is a type that isn't stackable, and I'm not picturing what's left to stack for the flash controller. The NAND chips would probably be commodity chips rather than bespoke stackable units.

It's not clear to me, at least with the current technical disclosures, why they would need this for the PS5. I'm still eagerly anticipating the teardown for the custom liquid metal barrier alone.

It would be interesting to see if it has such an implementation, although I'd be curious if Sony would indicate why it would need to do this since it seems nothing else in this class of hardware has required it.
 
No idea; crafting games like that takes more than just advanced hardware, I think :p Judging by the texture quality, 24 GB at close to 1 TB/s probably wouldn't be a luxury.
Those are most likely tech demos for 2080 Ti/Turing Titan-class products, in non-game environments. Probably produced like a movie/CGI.

In pure technical achievement, Star Citizen is probably the closest. We'll be happy with Cyberpunk 2077, Forbidden West, the next Spider-Man, Pragmata, etc.

The things that make that demo look nice are the detailed geometry, lighting and staging. It was running on a 1080Ti at 60fps on UE4. It's not doing anything technically beyond next gen.
 
Is it reasonable to expect this level of graphics from PS5 and XSX late into the gen?

Probably depends on what you mean by level of graphics.
Texture and model detail? Yes, we saw the UE5 demo for the PS5 using Quixel libraries with similar results.

I think the main reason you won't see this in a full-fledged game is the sheer amount of man-hours/man-months necessary to build each second in that demo, and then make it interactive while still looking that good.





In short: the new consoles will have the GPU power to pull that off. It's just that we gamers wouldn't be willing to pay the necessary $500 per copy for a game that looks that good.
The development tools need to evolve by another couple of generations before a game looking like that can be made within budget.
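
As a sketch of that budget point, every figure below is a made-up assumption, just to show the shape of the problem:

```python
# Why demo-quality content doesn't scale to a full game.
# Every figure below is an assumption for illustration only.

ARTIST_HOURS_PER_DEMO_SECOND = 400     # assumed: staging, lighting, asset polish
LOADED_HOUR_COST_USD         = 100     # assumed fully-loaded labour cost

demo_seconds = 120                     # a ~2 minute demo
game_seconds = 20 * 3600               # a 20-hour game

demo_cost = demo_seconds * ARTIST_HOURS_PER_DEMO_SECOND * LOADED_HOUR_COST_USD
game_cost = game_seconds * ARTIST_HOURS_PER_DEMO_SECOND * LOADED_HOUR_COST_USD

print(f"demo content: ${demo_cost / 1e6:.1f}M")            # ~$4.8M: plausible
print(f"same density, 20h game: ${game_cost / 1e9:.1f}B")  # ~$2.9B: nobody's budget
```

The hardware runs it; the content pipeline is what can't be paid for.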

However, if you look at some games like Kena you'll see we're getting there!
 
The things that make that demo look nice are the detailed geometry, lighting and staging. It was running on a 1080Ti at 60fps on UE4. It's not doing anything technically beyond next gen.

Tech demos look beyond anything else because they're just that: tech demos. Those graphics in-game would be beyond next gen. Not even XSX/PS5 can pull all of this quality off while ray tracing full-time at 4K 60 fps. The UE5 tech demo on PS5/an RTX 2080 Max-Q laptop is a tech demo too: an interactive, scripted one.
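
Just to put numbers on the 4K60 claim, here's a back-of-envelope of the per-pixel compute budget. The TFLOP figures are the public specs; everything else is assumed:

```python
# Per-pixel compute budget at native 4K / 60 fps.
# TFLOP figures are the publicly announced specs.

PIXELS_4K  = 3840 * 2160
FPS        = 60
XSX_FLOPS  = 12.15e12
PS5_FLOPS  = 10.28e12

for name, flops in (("XSX", XSX_FLOPS), ("PS5", PS5_FLOPS)):
    per_pixel = flops / (PIXELS_4K * FPS)
    print(f"{name}: ~{per_pixel / 1e3:.1f} KFLOPs per pixel per frame")

# ~24 KFLOPs/pixel on XSX, ~21 on PS5, as a theoretical ceiling.
# Offline CGI spends orders of magnitude more per pixel, which is
# exactly the gap between a tech demo render and a shipping game.
```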






https://www.youtube.com/watch?v=nw-46cZ0Utk
 
It would be interesting to see if it has such an implementation, although I'd be curious if Sony would indicate why it would need to do this since it seems nothing else in this class of hardware has required it.
Agree. We’re looking at standard GDDR6 and a mere 8 MB of L3. I don’t see a performance case for anything else off-chip in that scenario.
 
I think the main reason you won't see this in a full-fledged game is the sheer amount of man-hours/man-months necessary to build each second in that demo, and then make it interactive while still looking that good.

Not having to consider/optimize for the number or size of assets makes that production, or procedural placement, so much simpler and therefore cheaper. It might take a while to work out that pipeline, but I think they'll get there. For landscapes, at least.
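
A minimal sketch of the kind of procedural placement meant here: deterministic, hash-seeded scattering, so artists dress a landscape with rules instead of placing every asset by hand. The density, jitter and asset names are placeholder assumptions:

```python
# Deterministic procedural scattering: the same (seed, cell) always yields
# the same placements, so nothing needs to be hand-placed or stored on disc.
# Density, asset list and value ranges are placeholder assumptions.
import random

ASSETS = ["rock_a", "rock_b", "cliff_chunk"]   # hypothetical asset names

def scatter_cell(cell_x, cell_y, world_seed=1337, per_cell=8, cell_size=16.0):
    rng = random.Random(hash((world_seed, cell_x, cell_y)))
    placements = []
    for _ in range(per_cell):
        placements.append({
            "asset": rng.choice(ASSETS),
            "x": cell_x * cell_size + rng.uniform(0, cell_size),
            "y": cell_y * cell_size + rng.uniform(0, cell_size),
            "rotation": rng.uniform(0, 360),
            "scale": rng.uniform(0.5, 2.0),
        })
    return placements

# Any cell can be regenerated on demand as the player approaches it:
for p in scatter_cell(10, 42):
    print(p)
```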

The part that can't be replicated is the staging of interactive vs. non-interactive scenes. The cinematography of a 1st/3rd-person game camera is always going to be pants next to a cutscene. Staging buys you so much in an image, and it isn't actually a technical win at all.
 
I have seen some discussion about XSX/XSS BC support saying that it will load stuff faster when running games in BC mode, etc., which seems logical given the speed of the SSD, the extra RAM and so on.
But I am trying to understand the speculation around SFS. If I understand it correctly, it reads the correct portions of a texture directly from the SSD. But how can it do that for old games? Aren't the assets packed into "archives", with the game knowing which archive to load to get access to an asset? Would SFS read the archive, unpack it and then "copy" the relevant portion of the correct file in the archive?
I assume that the game code requests either an archive or the asset, and then something loads the archive, unpacks it and puts the right asset into its location?
Would MS have to patch all games to make use of SFS in BC, then? Or should one expect that the archives are already unpacked in RAM and then read from there?

Just curious after seeing the Spider-Man GDC video where they talked about packing up assets, duplicating certain things into multiple archives and storing them across the disc/HDD.
 
I think the main reason you won't see this in a full fledged game is the sheer amount of man-hours/man-months necessary to build each second in that demo, and then make it interactive while still looking that good.

Wait a minute. You are saying these consoles are powerful enough to make that happen, but it would take too much dev time? Are you serious? I always thought graphics were a hardware boundary. I know nothing about how graphics work or anything like that, but what is actually difficult to make? Aren't these just assets loaded in? And then you take assets on moving objects that are now interactive?
 
I have seen some discussion about XSX/XSS BC support saying that it will load stuff faster when running games in BC mode, etc., which seems logical given the speed of the SSD, the extra RAM and so on.
But I am trying to understand the speculation around SFS. If I understand it correctly, it reads the correct portions of a texture directly from the SSD. But how can it do that for old games? Aren't the assets packed into "archives", with the game knowing which archive to load to get access to an asset? Would SFS read the archive, unpack it and then "copy" the relevant portion of the correct file in the archive?
I assume that the game code requests either an archive or the asset, and then something loads the archive, unpacks it and puts the right asset into its location?
Would MS have to patch all games to make use of SFS in BC, then? Or should one expect that the archives are already unpacked in RAM and then read from there?

Just curious after seeing the Spider-Man GDC video where they talked about packing up assets, duplicating certain things into multiple archives and storing them across the disc/HDD.

I don't believe SFS plays a factor in games not coded for it. The only way would be if the amazing BC technology developers at Microsoft found some magical means of injecting it into their VM/driver layers for older games. Even then, I don't think it would assist with the loading of assets, but perhaps with the eviction of unused assets from memory?
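
To make concrete why SFS needs the game's cooperation, here's a minimal sketch of the loop an SFS-aware title runs. The function names are hypothetical, but the shape is what the feature requires: the GPU writes a feedback map, and the game turns it into tile loads. A BC title never runs this loop, so the OS can't apply it underneath:

```python
# Minimal sketch of a Sampler Feedback Streaming loop.
# Function names are hypothetical placeholders, not a real API.

resident_tiles = set()

def read_gpu_feedback():
    """Pretend feedback map: (texture_id, tile_x, tile_y, mip) the GPU actually sampled."""
    return {("castle_wall", 3, 7, 2), ("castle_wall", 3, 8, 2)}

def load_tile_from_ssd(tile):
    print(f"streaming in tile {tile}")          # would be a small tiled-resource read

def frame():
    for tile in read_gpu_feedback():
        if tile not in resident_tiles:          # only fetch what was actually sampled
            load_tile_from_ssd(tile)
            resident_tiles.add(tile)
    # eviction of long-unsampled tiles would go here

frame()
```

An old game instead asks for whole archives/files, exactly as the question above describes, and nothing in its code ever reads a feedback map.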
 
I don't believe SFS plays a factor in games not coded for it. The only way would be if the amazing BC technology developers at Microsoft found some magical means of injecting it into their VM/driver layers for older games. Even then, I don't think it would assist with the loading of assets, but perhaps with the eviction of unused assets from memory?

Sounds reasonable. Let's hope you are wrong and MS has done some magic :)
 
Wait a minute. You are saying these consoles are powerful enough to make that happen, but it would take too much dev time? Are you serious? I always thought graphics were a hardware boundary. I know nothing about how graphics work or anything like that, but what is actually difficult to make? Aren't these just assets loaded in? And then you take assets on moving objects that are now interactive?
Budgets are a big reason why some games look so much better than others. Talent is another factor. But if you're pushing the limit for the best-looking stuff, you're likely pre-computing as much as possible, and that takes a lot of time and labour.
 
Budgets are a big reason why some games look so much better than others. Talent is another factor. But if you're pushing the limit for the best-looking stuff, you're likely pre-computing as much as possible, and that takes a lot of time and labour.

The fact that some people still think there is a “Good Graphics” button, and that these things just happen without the absolutely immense human effort and talent behind them... my mind boggles.
 