Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
I have to watch it again; I was reading the CC at work. :runaway:
I'm not sure if he actually says the compressed numbers on the stream but it's what Eurogamer etc got from Sony (there were other details in their specs too that Cerny didn't mention)
 
Can someone kindly explain to me on which basis are people online saying that RDNA flops are a different thing compared to GCN flops? (I know it's a different architecture but I am out of the loop)
Thanks
 
It means that, given the "same" configuration, both would offer the same theoretical FLOPS, but RDNA would be faster in practice: it gets more real work done due to differences in architecture.
 

can you please link me any AMD source about this? as I said, I'm out of the loop. Thanks
 
I'm not sure if he actually says the compressed numbers on the stream but it's what Eurogamer etc got from Sony (there were other details in their specs too that Cerny didn't mention)
Stream at 17:30

The hardware decompressor can take 5.5 GB/s in and put out up to 22 GB/s, depending on compression ratio. Obviously it needs exactly 4:1 compression to achieve 22 GB/s, and he only gave 8-9 GB/s as a typical figure for Kraken-compressed data. That was not a spec. The hardware spec is 5.5 to 22 GB/s depending on data compressibility.
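As a quick illustration of how those numbers relate (the 5.5 GB/s raw rate and 22 GB/s output cap are from the presentation; the ratios are just sample inputs):

```python
# Effective SSD read rate: raw 5.5 GB/s multiplied by the data's
# compression ratio, capped by the decompressor's 22 GB/s output limit.
RAW_GBPS = 5.5
DECOMP_CAP_GBPS = 22.0

def effective_rate(compression_ratio: float) -> float:
    """Decompressed output rate in GB/s for a given compression ratio."""
    return min(RAW_GBPS * compression_ratio, DECOMP_CAP_GBPS)

print(effective_rate(1.0))   # incompressible data -> 5.5
print(effective_rate(1.55))  # ~typical Kraken -> ~8.5 (the 8-9 GB/s figure)
print(effective_rate(4.0))   # exactly 4:1 -> 22.0 (the hardware ceiling)
```

Anything above 4:1 compression still tops out at 22 GB/s, which is why that figure is a ceiling and not a typical rate.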
 
Do you know what I want BC for? Nothing. Given the two alternatives, I'll go for the system with better 4K capabilities, even if it won't let me play my PS4 games.

Rolling generations are more a thing than ever now. XSX will do HDR (real HDR) for BC games (Gears 5). Has Cerny mentioned something like this today?

I am equally prepared for PS5 to have the same ballpark performance as the competition, as well as to be significantly slower because of memory bandwidth.

Even if it could sustain 2.23 GHz on the GPU and 3.5 GHz on the CPU at the same time, there's still about the same absolute TF difference as there was between the One X and the Pro (a 1.8 TF gap there).
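For reference, the paper-spec arithmetic behind those gaps, using the published CU counts and clocks and the standard FP32 FLOPS formula for AMD GPUs:

```python
# Theoretical FP32 TFLOPS = CUs * 64 lanes * 2 ops (FMA) * clock (GHz) / 1000
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000

xsx = tflops(52, 1.825)    # ~12.15 TF (Series X published spec)
ps5 = tflops(36, 2.23)     # ~10.28 TF (PS5 at max boost clock)
one_x, ps4_pro = 6.0, 4.2  # last gen's published figures

print(round(xsx - ps5, 2))        # ~1.87 TF gap this gen
print(round(one_x - ps4_pro, 2))  # 1.8 TF gap last gen
```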
 
Kind of moot, considering games will only be able to use 13-13.5 GB of RAM?
Not at all; in fact it's the opposite: it helps a lot with the limited RAM.

I was suggesting this before, I'm happy it was in the presentation...

It could be fast enough to load on demand based on the player's view frustum, instead of just preloading/releasing sections based on player distance. This is all or nothing: it either halves the memory needed or saves nothing, because if you turn around 180°, you basically have to reload all of the world data now in view. It could be fast enough to practically load between frames as you turn. That would double the detail level possible for a given amount of RAM.
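A back-of-the-envelope sketch of that all-or-nothing trade-off; the working-set size and turn time here are made-up illustrative numbers, not anything from the presentation:

```python
# "All or nothing": keeping only the view frustum resident halves memory,
# but a 180-degree turn must stream in the other half before it is on
# screen. All figures below are hypothetical, for illustration only.
world_working_set_gb = 8.0                       # data near the player
frustum_resident_gb = world_working_set_gb / 2   # only what's in view
turn_seconds = 0.5                               # time to spin the camera

required_gbps = frustum_resident_gb / turn_seconds
print(frustum_resident_gb, required_gbps)  # 4.0 GB resident, 8.0 GB/s needed
```

Under these assumed numbers, the required stream rate lands right in the 8-9 GB/s typical-throughput range quoted for the hardware, which is the whole argument for why the saving becomes practical.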
 
Even if it could sustain 2.23 GHz on the GPU and 3.5 GHz on the CPU at the same time, there's still about the same absolute TF difference as there was between the One X and the Pro (a 1.8 TF gap there).
If that's the case, then the 3D audio will be coming from the PS5's cooling. It's going to sound like a liquidizer.
 
Even if it could sustain 2.23 GHz on the GPU and 3.5 GHz on the CPU at the same time, there's still about the same absolute TF difference as there was between the One X and the Pro (a 1.8 TF gap there).

You can’t look at the absolute number though, it’s all relative.

1.8 is ~43% of 4.2, and we all know that this is more or less the difference between PS4 Pro and XbX.

1.8 is “only” ~18% of 10.2, so the difference here is much, much less, and in the world of diminishing returns we’re really not talking about much.
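The relative-gap arithmetic is easy to verify, using the TF figures from the posts above:

```python
# Same 1.8 TF absolute gap, very different relative gap.
last_gen = 1.8 / 4.2   # One X vs PS4 Pro baseline
this_gen = 1.8 / 10.2  # Series X vs PS5 baseline

print(f"{last_gen:.0%}")  # ~43%
print(f"{this_gen:.0%}")  # ~18%
```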

I don’t care about BC that much, but I really do hope that Sony can come up with a standard HDR system like MS's. This really shows that MS are thinking way ahead here.
 
Not at all; in fact it's the opposite: it helps a lot with the limited RAM.

I was suggesting this before, I'm happy it was in the presentation...

It could be fast enough to load on demand based on the player's view frustum, instead of just preloading/releasing sections based on player distance. This is all or nothing: it either halves the memory needed or saves nothing, because if you turn around 180°, you basically have to reload all of the world data now in view. It could be fast enough to practically load between frames as you turn. That would double the detail level possible for a given amount of RAM.

Is it really fast enough to do that, though? You're looking at around 367 MB/frame at 60 Hz.

Edit: And that's assuming that all of the data you need is stored at the absolute maximum compression ratio. And that's a full 16.7 ms read, so it requires at least one frame of buffering just to hide the read.
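The per-frame budget falls out of the peak figures directly:

```python
# Per-frame streaming budget at 60 Hz, from the decompressor's peak
# 22 GB/s output and the raw 5.5 GB/s rate for incompressible data.
peak_gbps = 22.0
raw_gbps = 5.5
fps = 60

per_frame_mb = peak_gbps * 1000 / fps      # best case, max compression
raw_per_frame_mb = raw_gbps * 1000 / fps   # worst case, no compression
print(round(per_frame_mb))      # ~367 MB per 16.7 ms frame
print(round(raw_per_frame_mb))  # ~92 MB if data is incompressible
```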
 
You can’t look at the absolute number though, it’s all relative.

True, but say, what does 3 TF of RDNA 2 equal in current-gen GCN flops? That's not too far from the then-speculated Lockhart console.
It also means more RT processing power, which seems rather power-hungry. MS did a demo of a fully path-traced game.
 
Stock AMD, same as MS. Cerny said he saw a game with RT reflections at a decent frame rate, so forget about lighting and shadows.

Yeah, I'm thinking that if people are correct, raytracing won't really happen this entire console gen, as even on PC it's often thought to be too early; affordable hardware like a 2060 doesn't REALLY have enough grunt for raytracing. Which is a bummer, since you're looking at 7 years. However, pro consoles may be a different story.

Because they decided to go cheap. The chip will be their smallest ever.

I've seen a few people say this (before the reveal: 36 CUs would be ~300 mm², Sony has never made a console die that small, therefore PS5 cannot be 36 CUs). But what people miss is that smallest isn't necessarily cheapest: 7nm is more expensive per unit of die area.

The Series X SoC has a similar area to the One X's, but I assume it's costing MS significantly more (and of course it has much more "stuff", i.e. transistors, on there).


Having to devote compute resources to 3D audio has always put it first on the chopping block. When push comes to shove, I'm sure that will be the case this generation too, especially if there's enough of a performance delta between PS5, Series X, and PC that developers have to throw every last bit of grunt at game logic and graphics to make them look roughly the same.

Carmack actually tweeted about this recently.


It was then clarified in the replies that it's likely just rebranded AMD TrueAudio, which IS on the CUs.


It might be very close to 1:1 in GPU performance due to the ~22% faster back-end.

What is faster in the back end? Doesn't almost everything that matters (ROPs, TMUs) scale with CUs?
 

AFAIK the ROP count is the same (64) on both consoles. Series X's run at 1.825 GHz, PS5's at 2.23 GHz.
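With those reported figures (the PS5 ROP count had not been officially confirmed at the time), peak pixel fill rate scales with clock alone:

```python
# Peak pixel fill rate = ROPs * clock (Gpixels/s); both GPUs
# reportedly have 64 ROPs, so only the clocks differ.
def fill_rate_gpix(rops: int, ghz: float) -> float:
    return rops * ghz

ps5 = fill_rate_gpix(64, 2.23)   # ~142.7 Gpixels/s
xsx = fill_rate_gpix(64, 1.825)  # ~116.8 Gpixels/s
print(round(ps5 / xsx - 1, 3))   # ~0.222 -> PS5 ~22% faster on paper
```

That clock-for-clock edge on the fixed-function back-end is where the "~22% faster" figure in the earlier post comes from.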
 
Is it really fast enough to do that, though? You're looking at around 367 MB/frame at 60 Hz.

Edit: And that's assuming that all of the data you need is stored at the absolute maximum compression ratio. And that's a full 16.7 ms read, so it requires at least one frame of buffering just to hide the read.

Cerny did say in the presentation that if it takes a player 0.5 seconds to spin the camera behind them, that's enough time to load ~4 GB worth of textures from the SSD. If that's true, it's insane: developers won't have to use corridors, other tricks, or level designs to hide streaming any more. It'll give so much more freedom.
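That claim is consistent with the typical compressed-throughput figure; a quick check (taking the 8.5 GB/s midpoint of the quoted 8-9 GB/s range as an assumption):

```python
# Cerny's claim: half a second of camera spin is enough time to
# stream in ~4 GB of textures at typical Kraken-compressed rates.
spin_seconds = 0.5
typical_gbps = 8.5  # assumed midpoint of the 8-9 GB/s typical figure

loaded_gb = spin_seconds * typical_gbps
print(loaded_gb)  # 4.25 -> in the right ballpark for "~4 GB"
```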
 