Next Generation Hardware Speculation with a Technical Spin [2018]

Nah. Target the default performance. If someone chooses to use slower flash (assuming the default isn't the slowest!), that's their call. Anyone savvy enough to replace the flash drive should be expected to be able to follow some performance guidelines in the manual, and The Internet will provide lots of advice to look up.

Would NVMe be best in that case? Small, low power, and generally better performance. Lower capacity is the drawback, but that shouldn't matter if it's used in conjunction with an HDD.
 
Nah. Target the default performance. If someone chooses to use slower flash (assuming the default isn't the slowest!), that's their call. Anyone savvy enough to replace the flash drive should be expected to be able to follow some performance guidelines in the manual, and The Internet will provide lots of advice to look up.
Agree. Putting it in the default SKU is sufficient. If you experience pop in because you swapped in an underperforming drive, that’s on you.

I’d want it to be swappable so the drive could be replaced if it started showing aging effects. I assume that if devs were targeting it, the default would be no loading in most games, and there’d be no benefit to a faster (or even larger) drive.
 
Crazy bandwidth rumours: 500-700 GB/s for PS5
That's not crazy. That's expected for GDDR6. The only question is how wide the memory bus is going to be.

At the clockspeeds that Nvidia are running the GDDR6 on the Geforce and Quadro RTX cards:
  • 256-bit gets you 448 GB/s for 8GB of memory @ 1 GB per chip.
  • 352-bit gets you 616 GB/s for 11GB of memory @ 1 GB per chip.
  • 384-bit gets you 672 GB/s for 24GB of memory @ 2 GB per chip.
12 × 1GB chips on a 384-bit bus seems like a plausible configuration to me for PS5/Scarlet.
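For reference, here's a minimal sketch of the arithmetic behind those figures, assuming 14 Gbps per pin (the rate Nvidia is shipping on the RTX cards); the helper name is just illustrative:

```python
# GDDR6 bandwidth back-of-the-envelope: bus width (bits) times the
# per-pin data rate (Gbps), divided by 8 to convert bits to bytes.
def gddr6_bandwidth_gb_s(bus_width_bits, gbps_per_pin=14):
    return bus_width_bits * gbps_per_pin / 8

for bus in (256, 352, 384):
    print(f"{bus}-bit @ 14 Gbps -> {gddr6_bandwidth_gb_s(bus):.0f} GB/s")
# 256-bit -> 448 GB/s, 352-bit -> 616 GB/s, 384-bit -> 672 GB/s
```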
 
For what it's worth, for a long time I had assumed that rasterization and compute would be the be-all and end-all going forward, and that by the time we could muster enough hardware power for RT, non-RT methods and approximations would have so much more power available that real-time RT would never be a thing. Granted, true native real-time RT is still nowhere close. But architectures and designs for RT hybrids have now arrived, and given enough time it's easy to see where things are ultimately headed.

Such a thing could be possible with other unique architectures built to solve different problems. We may yet see a return of those older concepts, but it likely won't be in the cards for a while.

The problem with the rasterizer approach seems to be that it gets harder and harder to find ways to keep pushing visual improvement, and that there are problems, like reflections, that can't be solved in screen space. It also sounds like ray tracing will be far more reliable and require fewer workarounds from a production standpoint. I think it's easy to take the difficulty of rasterizer methods for granted when we just play instead of creating art and designing levels. Not to mention, games have been moving toward ray tracing with that compute power for a while; they were just limited to screen space. Full-world visibility testing will also open up possibilities for audio and AI breakthroughs. Unless we want to be stuck with the same games for the next ten years, it'd be best if the new consoles had some GPU features to alleviate ray tracing overhead, making ray structures and tracking cheap and pushing the bottleneck back onto shading/ALU and bandwidth.
 
And as technology gets more complex, developer friendliness becomes ever more important. Cell took a further unconventional leap beyond PS2, one impactful enough for developers that Sony ruled that approach out for PS4. If you want to hear it from the horse's mouth, Mark Cerny spoke about this very deliberate decision at Gamelab 2013 in his presentation, 'The Road to PS4'.

After his introduction and background, the next 20 minutes are spent on how things got more complicated after the original PlayStation and how PS4 had to reverse that. He even mentions specific technical options (i.e. the approach that could've been) in the presentation. It's crap quality but fascinating nonetheless.


I like the irony of Mark Cerny singling out ray tracing as something exotic that devs definitely didn't want, yet 8 years later I read a tweet from a high-profile dev (who "suggested" PS4/X1 have 8GB) saying:

Good to see RTX cat out of the bag finally - hope all competitors and next gen consoles to catch up ^^

Umm...
 
I like the irony of Mark Cerny singling out ray tracing as something exotic that devs definitely didn't want, yet 8 years later I read a tweet from a high-profile dev (who "suggested" PS4/X1 have 8GB) saying:



Umm...
Maybe because 8 years later the transistor density has increased drastically, which allows for dedicated RT hardware without eating too much into the number of units dedicated to traditional rendering and general compute.

And even 8 years later it's only barely practical in GPUs with enormous die areas and TDPs that would never fit a console.
 
As soon as RTX was announced by Nvidia, multiple devs commented that they expected it for the generation after next.


Maybe. We'll see. That little exchange you quote comes across as some sort of in-joke? Probably just me being optimistic for once.

ToTTenTranz said:
Maybe because 8 years later the transistor density has increased drastically, which allows for dedicated RT hardware without eating too much into the number of units dedicated to traditional rendering and general compute.

And even 8 years later it's only barely practical in GPUs with enormous die areas and TDPs that would never fit a console.

You might well be right, but this ray tracing talk gives me similar feels to the 8GB GDDR5 RAM talk in 2011/12. It was "crazy talk" and never going to happen. Then it did.

Never say never again...
 
Maybe because 8 years later the transistor density has increased drastically, which allows for dedicated RT hardware without eating too much into the number of units dedicated to traditional rendering and general compute.

And even 8 years later it's only barely practical in GPUs with enormous die areas and TDPs that would never fit a console.
Yeah, I think it is appearing in hybrid form because we now have enough compute to do so, and the additional hardware multiplies its efficiency enough to make it practical.

Cerny was talking about hypothetical ray-tracing hardware replacing traditional rasterizers. The reference was how Cell caused headaches for PS3 development. Devs needed to code very differently, and the same would have been true for PS4 if they had replaced the majority of the compute with circuitry exclusively for RT.

The point is to avoid a situation where games will look like crap unless developers rewrite the entire engine for this specific new hardware.

This time, the sacrifice looks fine. For the same area, are we talking about maybe 10TF plus RT versus 12TF without? In contrast, Cell was a massive drop in CPU power if you didn't recode the entire pipeline to use the SPEs correctly. We could say Cell was the equivalent of having 4TF+RT available next gen instead of 12TF and skipping RT. It's nowhere near that bad.
 
I like the irony of Mark Cerny singling out ray tracing as something exotic that devs definitely didn't want.
Well no! 8 years ago they didn't! SD games at :love:0 fps and all noisy? That'd be like asking for 3D rasterisation with the technology of 1982 in a home computer - it'd be completely underpowered and misplaced. Looking forwards, raytracing makes sense increasingly so for both raysterising and, potentially, purely raytraced visuals, plus ray-traced other stuff (AI, audio, things).
 
Well no! 8 years ago they didn't! SD games at :love:0 fps and all noisy? That'd be like asking for 3D rasterisation with the technology of 1982 in a home computer - it'd be completely underpowered and misplaced. Looking forwards, raytracing makes sense increasingly so for both raysterising and, potentially, purely raytraced visuals, plus ray-traced other stuff (AI, audio, things).

LOL @ < 30 fps turning into that because you didn't put a space between the < and 3. Reads as "a heart-pumping 0 fps".
 
Maybe. We'll see. That little exchange you quote comes across as some sort of in-joke? Probably just me being optimistic for once.



You might well be right, but this ray tracing talk gives me similar feels to the 8GB GDDR5 RAM talk in 2011/12. It was "crazy talk" and never going to happen. Then it did.

Never say never again...
RAM is different. You can increase that relatively late in the game if your bus and/or chip size selection allows. If there’s RT hardware in next gen, it was decided a while ago.
 
RAM is different. You can increase that relatively late in the game if your bus and/or chip size selection allows. If there’s RT hardware in next gen, it was decided a while ago.

That is beside my point. My point was that in 2011/12 most forum users here and GAF etc were saying 8GB RAM was a pipe dream and now are saying similar with RT (and as you say some devs too). That is all.

Well no! 8 years ago they didn't! SD games at :love:0 fps and all noisy? That'd be like asking for 3D rasterisation with the technology of 1982 in a home computer - it'd be completely underpowered and misplaced. Looking forwards, raytracing makes sense increasingly so for both raysterising and, potentially, purely raytraced visuals, plus ray-traced other stuff (AI, audio, things).

I took what Mark Cerny said to mean that even if the RT tech had been fully available for a late-2013 console, they would still have wanted simple and straightforward?... but it seems a huge and unfair jump to go from devs not wanting anything exotic 5 years ago to, going by a few dev tweets, it being the next big thing, all within one gen! Will it mean that if no RT hardware is in the next-gen chips, devs will complain all gen like they did about previous gens' low RAM?

The good news is that the tech seems to have been worked on for a good while under NDA, so hopefully that bodes well for it somehow being in the next-gen systems. Hope so anyway, as the complaints all gen will get boring fast...
 
That's not crazy. That's expected for GDDR6. The only question is how wide the memory bus is going to be.

At the clockspeeds that Nvidia are running the GDDR6 on the Geforce and Quadro RTX cards:
  • 256-bit gets you 448 GB/s for 8GB of memory @ 1 GB per chip.
  • 352-bit gets you 616 GB/s for 11GB of memory @ 1 GB per chip.
  • 384-bit gets you 672 GB/s for 24GB of memory @ 2 GB per chip.
12 × 1GB chips on a 384-bit bus seems like a plausible configuration to me for PS5/Scarlet.
384-bit would be great, but I'm wondering why there's never been a mid-range GPU with a 384-bit GDDR bus. There must be a large inherent cost associated? Impact on yield?

GDDR6 clocks should rise by one bin step every two years or so, meaning 16Gbps might be a low-cost bin in 2020. We have seen this trend with most external memories.

Nvidia's whole range is launching at 14Gbps. So on the low end there might not be enough GDDR6 volume failing to pass 14Gbps to warrant a 12Gbps bin for mid-range cards. On the high end, the Nvidia memory controller might be unable to reach Samsung's 16Gbps, hopefully to be resolved on 7nm.
 
That is beside my point. My point was that in 2011/12 most forum users here and GAF etc were saying 8GB RAM was a pipe dream and now are saying similar with RT (and as you say some devs too). That is all.



I took what Mark Cerny said to mean that even if the RT tech had been fully available for a late-2013 console, they would still have wanted simple and straightforward?... but it seems a huge and unfair jump to go from devs not wanting anything exotic 5 years ago to, going by a few dev tweets, it being the next big thing, all within one gen! Will it mean that if no RT hardware is in the next-gen chips, devs will complain all gen like they did about previous gens' low RAM?

The good news is that the tech seems to have been worked on for a good while under NDA, so hopefully that bodes well for it somehow being in the next-gen systems. Hope so anyway, as the complaints all gen will get boring fast...

One important distinction, I think, is that RT hardware in 2013 would have been assumed to be very specialized and not good at rasterization, incompatible with existing APIs and known optimizations, and overall difficult for developers.

Now RT is emerging as an enhancement to existing compute paths in GPUs, with developer and API support, etc. It should be seen as a very different proposition.
 
384-bit would be great, but I'm wondering why there's never been a mid-range GPU with a 384-bit GDDR bus. There must be a large inherent cost associated? Impact on yield?

GDDR6 clocks should rise by one bin step every two years or so, meaning 16Gbps might be a low-cost bin in 2020. We have seen this trend with most external memories.

Nvidia's whole range is launching at 14Gbps. So on the low end there might not be enough GDDR6 volume failing to pass 14Gbps to warrant a 12Gbps bin for mid-range cards. On the high end, the Nvidia memory controller might be unable to reach Samsung's 16Gbps, hopefully to be resolved on 7nm.

If I understand correctly what I've seen stated on the subject, the size of the chip has some relation to the bus width it can (comfortably) accommodate. I expect a big chip, so I don't see a problem there. There is then the cost of the added silicon on the chip itself and the additional traces and space being taken up on the motherboard.

I won't try to guess which configuration will best hit the sweet spot of capacity, bandwidth and cost in 2 years, but 12GB of RAM and 672 GB/s of bandwidth seems like a good target to try to hit, and a 384-bit interface to 12 × 1GB GDDR6 chips is one way to get there.
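As a minimal sketch of how that configuration adds up (assuming 32-bit-wide packages, 1 GB / 8 Gb density and 14 Gbps per pin, none of which are confirmed console specs):

```python
# Hypothetical 384-bit GDDR6 setup: chip count, capacity and bandwidth.
BUS_WIDTH_BITS = 384
CHIP_WIDTH_BITS = 32    # each GDDR6 package presents a 32-bit interface
CHIP_DENSITY_GB = 1     # 8 Gb parts
GBPS_PER_PIN = 14       # assumed data rate, matching current RTX cards

chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS            # 12 chips
capacity_gb = chips * CHIP_DENSITY_GB                # 12 GB
bandwidth_gb_s = BUS_WIDTH_BITS * GBPS_PER_PIN / 8   # 672 GB/s
print(chips, capacity_gb, bandwidth_gb_s)
```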
 
That is beside my point. My point was that in 2011/12 most forum users here and GAF etc were saying 8GB RAM was a pipe dream
Because realistically it was right up until PS4 launched, but it was always an outside possibility because we knew the tech was coming - it was just a matter of timelines and Sony got lucky.

and now are saying similar with RT (and as you say some devs too). That is all.
RT hardware could be included, because it exists now. It's actually existed for years in the mobile space but for some reason no-one wants to talk about that. ;)

I took what Mark Cerny said to mean that even if the RT tech had been fully available for a late-2013 console, they would still have wanted simple and straightforward?... but it seems a huge and unfair jump to go from devs not wanting anything exotic 5 years ago to, going by a few dev tweets, it being the next big thing, all within one gen!
Well, if the tech works, they'll want it, but at the same time we don't always get what we want. I think it's important to note the difference between wanting a thing in principle and wanting whatever particular implementation is available. I think many devs want full-on realtime raytracing as it solves a great many engine problems, but realistically, they may not want gimped rasterising hardware if the RT tech is impotent in its first iteration. Hence devs could say they want RT and also say they don't want RT, depending on which unqualified type they are talking about.
 
If I understand correctly what I've seen stated on the subject, the size of the chip has some relation to the bus width it can (comfortably) accommodate. I expect a big chip, so I don't see a problem there. There is then the cost of the added silicon on the chip itself and the additional traces and space being taken up on the motherboard.

I won't try to guess which configuration will best hit the sweet spot of capacity, bandwidth and cost in 2 years, but 12GB of RAM and 672 GB/s of bandwidth seems like a good target to try to hit, and a 384-bit interface to 12 × 1GB GDDR6 chips is one way to get there.

The image in this Digital Foundry article comparing the PS4 Pro and Xbox One X SoCs does a decent, if not perfect, job of showing what the difference is on the chip side. Not perfect because there are other variances between the two designs that are affecting the die size.

And the image in this Extremetech article shows there's no physical way they were putting a 384-bit bus on the PS4 APU, even if it could take advantage of the BW and it made sense from a cost perspective.
 
The image in this Digital Foundry article comparing the PS4 Pro and Xbox One X SoCs does a decent, if not perfect, job of showing what the difference is on the chip side. Not perfect because there are other variances between the two designs that are affecting the die size.

And the image in this Extremetech article shows there's no physical way they were putting a 384-bit bus on the PS4 APU, even if it could take advantage of the BW and it made sense from a cost perspective.


I think such a configuration could be possible if they put an 80CU GPU inside (72 active).
It would be big enough to fit a 384-bit bus on the PS5.
PS4 OG has a 20CU GPU (18 active)
Pro doubles that -> 40CU (36 active)
PS5 could double that again? 80CU (72 active)
We will see...
One advantage could be that Sony clocks the GPU low (let's say around the same frequency as the XBX's GPU) and still obtains some decent flops (~11TF)
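A quick sanity check of that ~11TF figure, assuming GCN-style CUs (64 lanes, 2 FLOPs per lane per cycle via FMA) and the Xbox One X clock of 1172 MHz; both are illustrative assumptions, not known PS5 specs:

```python
# Peak FP32 = active CUs * lanes per CU * FLOPs per lane per clock * clock (GHz).
def gpu_tflops(active_cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    return active_cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000

print(gpu_tflops(72, 1.172))  # ~10.8 TF for 72 active CUs at XBX-like clocks
print(gpu_tflops(36, 0.911))  # ~4.2 TF, PS4 Pro, for comparison
```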
 
Hmm. The discussion on storage speed is interesting. If loading times can be reduced dramatically at the cost of some graphics performance, the trade-off may be worth it.

To give an example: if you play a lot of MP games, most matches end in about 10 minutes. Then there are 1-2 minutes of matchmaking and another 1-3 minutes of loading depending on the game you’re playing. Over the course of an hour, eliminating hard drive loading time can probably net you one additional round. So if you’re getting 4 games per hour you might get 5, or up to 7 if you are capable of pub stomping and bringing matches down to 5 minutes.
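A rough sketch of that matches-per-hour arithmetic, with illustrative timings (10-minute match, ~2 minutes of matchmaking, load time varying):

```python
# Matches per hour as a function of the time spent per match cycle.
def matches_per_hour(match_min, matchmaking_min, load_min):
    return 60 / (match_min + matchmaking_min + load_min)

print(matches_per_hour(10, 2, 3))  # 4.0 per hour with HDD-class load times
print(matches_per_hour(10, 2, 0))  # 5.0 per hour with near-instant loading
```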

Certain games where you die often and have to reload will be much more bearable. Wolf: TNO was a terrible offender here, with its 3-4 minute load time after death.
 