Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

But when is it assigned, AMD? When you make the asset, or at the end of your vertex shader operation? Bah! :mad:
At runtime. That's the only point when it's meaningful. The idea is to evaluate a scene and simplify the parts that need the least detail, which will change with scene conditions. So you may have a wall that's brightly lit in the daytime, but as evening encroaches, one side is deep in shadow and practically black for the player, at which point fancy material shaders are redundant.
 

Blimey. I think I got muddled with the latter part of my post, because I was working on a level height map in Unity (with baked-in stuff) when I wrote that. My head went off the rails. I was talking garbage. That would explain why I made less sense than normal to @iroboto. I have tried to clean up my mess.

Yep, VRS is about runtime. What I think I had originally meant to ask when I started my post was: are AMD talking about feeding in a predetermined list of shading rate changes, determined before the call is made (potentially better than Tier 1, but not necessarily matching MS's Tier 2), or making decisions on shading rates based on dynamic data created during a particular call?
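For reference, in D3D12 terms the two approaches look roughly like this. This is only a minimal sketch of the generic API, not a claim about what AMD's hardware or tooling actually does internally:

```cpp
// Minimal D3D12 sketch of the two VRS tiers (illustrative only; assumes a
// Tier-2-capable device and an already-recorded command list).

#include <d3d12.h>

// Tier 1: one shading rate for the whole draw, decided on the CPU
// before the draw call is issued.
void SetPerDrawRate(ID3D12GraphicsCommandList5* cmdList)
{
    // Drop to 2x2 coarse shading for a draw we already know is low-detail.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
}

// Tier 2: a screen-space "shading rate image" (one texel per tile) that a
// compute pass can refill every frame from dynamic data such as luminance,
// motion or depth, so the rate varies across the frame at runtime.
void SetScreenSpaceRates(ID3D12GraphicsCommandList5* cmdList,
                         ID3D12Resource* shadingRateImage)
{
    D3D12_SHADING_RATE_COMBINERS combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // keep per-draw/per-primitive rate
        D3D12_SHADING_RATE_COMBINER_MAX          // then combine with the image
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(shadingRateImage);
}
```

Tier 1 is the "predetermined, per-draw" case; the shading rate image in Tier 2 is what lets a compute pass react to dynamic, in-frame data.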

I've had enough of computers today. I hate computers. I'm going to go and play some computer games.
 
So, something of a general note: even if the leaks were from the Microsoft side, I'm not sure if there's any interest in discussing them here...

One concept that may be of interest is the state of the tools and hardware used for game development.

There were hardware-level bugs in earlier non-finalized kits that impacted performance in certain areas. As in all development cycles, early hardware is not an exact indicator of performance, and as things get worked out, the devkits go through their own iterations before hitting final revision.

Microsoft is still revising their GameCore development environment. They are fixing bugs and providing updates. What the other leaks do not indicate is how much of a difference there is between last-gen dev environments and next-gen dev environments. It may seem to be a large difference if the Xbox One devkits are still based around updates from the time of the One X revision, so circa 2016 to 2017 for the overall dev environment. The new GameCore is likely based on a more modern 2020 environment.

There is also the concept of using one hardware model devkit for multiple target level releases.

Would Microsoft and Sony simply provide a PS5 or SeriesX hw devkit and rely on software to allow developers to target PS4, PS4Pro, and PS5, or SeriesX, Lockhart, OneX, and OneS, with a single hw platform? I'm not sure how they would be able to hobble the IO portion to emulate the portable rust-spinner hard drives used in last-gen. Do you somehow apply quotas to the IO system, limiting it to 30 MB/s and injecting extra latency?
 

You can do this. Apple has great tools to simulate shitty I/O, particularly poor internet and network bandwidth - this is one of the easier things to simulate as you're just adding delays into a bunch of APIs.
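As a very rough illustration of that "adding delays into APIs" idea (all names and numbers here are made up, not any real devkit facility):

```cpp
// Hypothetical read shim that caps throughput and injects seek-style latency
// so a fast SSD behaves like a ~30 MB/s laptop HDD. Illustrative only.

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <thread>

constexpr double kMaxBytesPerSec = 30.0 * 1024 * 1024;            // emulated HDD bandwidth
constexpr auto   kSeekPenalty    = std::chrono::milliseconds(12); // per-request penalty

std::size_t ThrottledRead(std::FILE* file, void* dst, std::size_t bytes)
{
    const auto start = std::chrono::steady_clock::now();
    const std::size_t got = std::fread(dst, 1, bytes, file);

    // How long this request *should* have taken at HDD speed, plus a fixed
    // seek penalty, minus however long the real SSD read actually took.
    const auto target = kSeekPenalty +
        std::chrono::duration_cast<std::chrono::steady_clock::duration>(
            std::chrono::duration<double>(got / kMaxBytesPerSec));
    const auto elapsed = std::chrono::steady_clock::now() - start;
    if (elapsed < target)
        std::this_thread::sleep_for(target - elapsed);
    return got;
}
```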
 
I doubt it's in Sony's plan, as the whole priority of MS is APIs like DirectX... I mean, Xbox is literally named after it. Sony likes a little more low-level access to their specific hw, hence why things like BC became an issue later.
 
You can do this. Apple has great tools to simulate shitty I/O, particularly poor internet and network bandwidth - this is one of the easier things to simulate as you're just adding delays into a bunch of APIs.

So that could feasibly cover the drive IO if desired.

But what about the CPU portion? Is there a "behave like little Jaguars" setting on the mighty Zen 2 CPUs used for next-gen? I know they can turn off SMT (at least it's exposed as an option on the MS side). Maybe turn off some of the cache. I don't think there's a low-speed mode; I mean, you can set a low CPU clock, but it's still going to be more performant per clock. Maybe it's compiler tricks again?
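One crude software-side option, just to illustrate the idea (nothing suggests either platform holder actually does this): pin the process to fewer hardware threads, which at least approximates the narrower Jaguar setup even if it does nothing about IPC, cache or clocks.

```cpp
// Crude illustration of hobbling a fast 8-core/16-thread Zen 2 in software:
// restrict the game to the first N logical processors. This is a rough
// approximation at best; it doesn't touch clocks, IPC or cache behaviour.

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

bool RestrictToCores(unsigned coreCount)
{
    // Build a mask covering the first N logical processors
    // (e.g. 0x7F for a 7-thread, Jaguar-style target).
    const DWORD_PTR mask = (DWORD_PTR(1) << coreCount) - 1;
    return SetProcessAffinityMask(GetCurrentProcess(), mask) != 0;
}
```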

Perhaps the next-gen devkits wouldn't be able to give exact performance levels of last gen, but they could be used for general proof of concept and debugging. So they'd still need a last-gen hw kit.
 
With the SSD being able to load data into RAM really fast, developers will obviously use it to load data not just more quickly but also more frequently. Doesn't this take a toll on the RAM? Will we see more incidents of RAM going bad and consoles bricking?
 
Can you thoroughly explain your thought process here on how RAM will brick more?
Please begin with why RAM bricks to begin with, what percentage to expect from standard yield, and then move on to how SSD loading into memory (more) will cause more bricking.
 
The memory cells used by current devices need to be refreshed all the time, otherwise the bits in the cells won't hold and get wiped out. This happens every so many μs, yes, microseconds. I don't get the concern over how often RAM is used.
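For a sense of scale: standard DDR-class DRAM is specced around a 64 ms retention window covered by 8192 refresh commands, which works out to a refresh command roughly every 64 ms / 8192 ≈ 7.8 μs (GDDR6 has its own refresh parameters, but it's the same order of magnitude). The cells are being rewritten constantly regardless of what the SSD is doing.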
 
Can you thoroughly explain your thought process here on how RAM will brick more?
Please begin with why RAM bricks to begin with, what percentage to expect from standard yield, and then move on to how SSD loading into memory (more) will cause more bricking.
I think he's asking a question... not stating an opinion. lol
 
With the SSD being able to load data into RAM really fast, developers will obviously use it to load data not just more quickly but also more frequently. Doesn't this take a toll on the RAM? Will we see more incidents of RAM going bad and consoles bricking?

What's 2.4-9 GB/s of bandwidth to memory that's designed to max out at 448-560 GB/s?

The GPUs will put more pressure on VRAM than the SSDs could ever accomplish.
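Worked out: even a sustained 9 GB/s of compressed SSD throughput is 9 / 448 ≈ 2% of PS5's memory bandwidth, and 2.4 GB/s raw is 2.4 / 560 ≈ 0.4% of Series X's peak. The streaming traffic is a rounding error next to what the GPU already pulls every frame.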
 
I recall reading about their work for XBO and PS4. I'm surprised none of this (additional RDO compression work) was developed until now?

People use their own RDO encoder or Binomial's encoder. People use Kraken because it is so fast to decode; with a slightly better compression ratio than zlib, it runs faster on the Jaguar CPU than the hardware zlib decoder on PS4 and XBO.
 
https://gamingbolt.com/games-with-m...from-ps5-and-xbox-series-x-ssds-dysmantle-dev

“Streaming world content is one of the big things that will get better. You can have a more detailed world as you can stream data faster from mass storage into the GPU. You can also move faster in the game world as the hardware can keep up better. Dysmantle has also a streaming system for the world, but we are probably not exceeding even current-gen capabilities in that regard. Games with massive, extremely detailed worlds will benefit the most.”

Töyssy added that as far as multiplatform developers are concerned, they will most likely scale data in such a manner that their games work the same on both PS5 and Xbox Series X in spite of any differences in hardware.

“Multiplatform games will probably find a suitable bandwidth and tools to scale the amount of data so that it works on all platforms,” he said. “It might be that texture detail level can be varied according to available bandwidth, for example.”
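A toy sketch of what "varying texture detail according to available bandwidth" could mean inside a streaming system (the function and thresholds are invented for illustration, not anything the developer described):

```cpp
// Hypothetical mip-bias selection for a texture streamer: if the measured
// streaming throughput can't feed the full-resolution working set, skip the
// top mip level(s). Names and numbers are placeholders.
int MipBiasForBandwidth(double measuredBytesPerSec, double requiredBytesPerSec)
{
    int bias = 0;
    while (measuredBytesPerSec < requiredBytesPerSec && bias < 3) {
        requiredBytesPerSec /= 4.0; // each dropped mip level is ~1/4 the data
        ++bias;
    }
    return bias;
}
```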
 
Can someone who understands this better than I do take a look at this? Because what I'm reading seems to say that the clock boosting comes with one-offs for graphics, effects, etc.

https://www.psu.com/news/kitatus-an...up-a-lot-of-different-opportunities-for-devs/

Kitatus and Friends Says PS5 Boost Clock ‘Opens Up A Lot Of Different Opportunities’ For Devs

Speaking with WCCFTech about the design of the PS5, Shan noted how Sony has given developers freedom to utilise the console’s meaty innards as they see fit.

I think it’s a bit like when developing for PC, where you have access to all that different hardware and you can kind of tune things based on your needs. And what Sony are essentially saying are, here’s your tool of options, you can absolutely throttle to the max. We prefer if you didn’t, but if there’s like a fringe case where you’re just off that tiny bit of performance you need, we will let you squeeze a little bit extra.

It also opens up a lot of different opportunities such as, say that you wanted to take rendering for some specific thing like a particle and you wanted to run it through something like the CPU for a specific cutscene, that’d be possible now whereas historically you had to be really careful that you didn’t flood a specific thread.
 
I think what he is saying is that if you have a scene that would traditionally be CPU-limited followed by a scene where you are GPU-limited, you can divert performance from one to the other, where in a fixed-clock system you could not. It's a great concept, except that the sustained performance of Series X is higher than the theoretical maximums of PS5, so it's likely that you will get pretty comparable results running the same code on both systems.
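A toy sketch of the kind of decision being described, purely to make the idea concrete (the names are placeholders, not any real engine or PS5 API):

```cpp
// Toy example of shifting a particle update between GPU compute and a spare
// CPU worker based on last frame's timings. Purely illustrative.

struct FrameTimings {
    float gpuMs; // GPU frame time last frame
    float cpuMs; // busiest CPU worker time last frame
};

enum class ParticlePath { GpuCompute, CpuWorker };

ParticlePath ChooseParticlePath(const FrameTimings& t, float budgetMs)
{
    // If the GPU is the bottleneck but the CPU has headroom, move the
    // simulation to a CPU worker for this scene, and vice versa.
    if (t.gpuMs > budgetMs && t.cpuMs < budgetMs * 0.75f)
        return ParticlePath::CpuWorker;
    return ParticlePath::GpuCompute;
}
```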

We are gearing up for launch now, and the hype machines are in full swing. Expect a bunch of marketing speak and fluff pieces highlighting certain unique qualities of the consoles, even if they aren't necessarily an advantage. I bet if you play a drinking game where you down a shot every time someone from Microsoft says "consistent, sustained performance" in their upcoming event you'll be wasted halfway through.
 