Next Generation Hardware Speculation with a Technical Spin [2018]

I can't see a multi-SKU launch for either platform holder, as it robs the marketing team of a simple "see this cool game? buy this box" message; they'd have to start adding "cool game visuals PS5Pro/XB2X only" text to videos. The advantage of the mid-life refresh is that you get to bank the process and yield improvements to offer a faster box at comparable cost to the launch system, letting you sell another box to existing owners for a nice second round of sales. That's why it isn't a big deal that PS4 Pro and XB1X (less sure about the X) don't outsell their predecessors: they're largely sold to existing owners or to new customers who were turned off by the lower performance of the XB1.

Ray tracing seems like a strange luxury to go for. Is it possible to blend RT into a regular raster engine, i.e. get the RT to calculate lights but build the rest of the scene as normal? Or is using RT in an engine an all-or-nothing kind of thing? I can't help but recall the monster rigs necessary for the GDC Star Wars demo, and I can't see 1-2 years getting us a single-chip solution with even a useful fraction of that power.
 
Much of the cost ramp below 20nm came from the adoption of FinFET, which brought a host of challenges, such as a radical rethinking of design and chip layout. Looking at the process from a materials and physics perspective, 7nm should provide significant cost savings unless something unexpected happens. Thus far, nobody is reporting issues, and if 7nm is in the next iPhone and/or iPad, those chips are already being produced in serious volume.

And there's where you get cost savings, not re-designing chips on a 12/18 month cycle but making the exact same chip for a product that'll sell for many years, i.e. consoles.
Do you have sources by any chance? My Google fu is not yielding the results I want. Everything I've read points to higher design costs and higher wafer costs. Cost per transistor is lower, but cost per mm^2 is higher.
 
I can't see a multi-SKU launch for either platform holder, as it robs the marketing team of a simple "see this cool game? buy this box" message; they'd have to start adding "cool game visuals PS5Pro/XB2X only" text to videos. The advantage of the mid-life refresh is that you get to bank the process and yield improvements to offer a faster box at comparable cost to the launch system, letting you sell another box to existing owners for a nice second round of sales. That's why it isn't a big deal that PS4 Pro and XB1X (less sure about the X) don't outsell their predecessors: they're largely sold to existing owners or to new customers who were turned off by the lower performance of the XB1.

Ray tracing seems like a strange luxury to go for. Is it possible to blend RT into a regular raster engine, i.e. get the RT to calculate lights but build the rest of the scene as normal? Or is using RT in an engine an all-or-nothing kind of thing? I can't help but recall the monster rigs necessary for the GDC Star Wars demo, and I can't see 1-2 years getting us a single-chip solution with even a useful fraction of that power.
One of the interesting things about mid-gen refreshes, or the introduction of features on a shorter time frame, is that they give developers and platform designers a longer runway to take advantage of newer hardware earlier. It took 5 years to get to this point: we started the generation with very little compute shader usage, and today we see something like TLOU2. Graphics have certainly come a long way, and the biggest change IIRC was the introduction of compute shaders, which were announced with DX11 back in 2008.

The move to 4K with the mid-gen refreshes gives devs the opportunity to begin experimenting and playing with 4K today, so that when it's time for next gen, we should see strong 4K and HDR performance on day 1.

So with regards to Ray Tracing, it would be good to introduce it now, but it wouldn't be mainstream until the next console.
DXR runs on the DirectX platform and is, for the most part, flag based. You check the system for a feature set and, depending on the result, you branch to that code. So it's very much like asking the engine which code path it should run for its shadows and lighting, for instance.

Metro Exodus should do something exactly like this.
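To illustrate what that flag check looks like in practice, here's a minimal sketch against the D3D12 API (this assumes a Windows SDK with the released DXR headers; the engine-side fallback path is implied, not shown):

// Minimal sketch of the DXR feature check described above (D3D12).
// If the reported tier is NOT_SUPPORTED, the engine branches to its
// existing raster shadow/lighting path instead.
#include <windows.h>
#include <d3d12.h>

bool SupportsDxr(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    return options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}

An engine would typically run this once at device creation and pick the RT or raster code path from there.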
 
https://www.eurogamer.net/articles/...ory-will-there-be-more-than-one-next-gen-xbox
Next-gen: the challenges and the opportunities

We need not worry about the CPU side of the next-gen machines. AMD's Ryzen offers phenomenal performance for the amount of silicon area it uses, it's power efficient and it's broadly competitive with the best on the market - Intel's Core architecture. As we've discussed in the past, the potential here for deeper, richer, more complex games - or indeed more titles running at 60fps - is mouth-watering.

The issue is that while AMD can offer a generational leap over the graphics performance of the base PlayStation 4 (effectively the main current-gen target platform), delivering the same increase in performance over PS4 Pro and especially Xbox One X is far more challenging. Doubling X's GPU power is achievable, but if the aim is also to double frame-rate, the ability to deliver richer graphics is more limited. If the objective is to lock to native 4K and deliver hugely improved visuals, again, 2x Xbox One X performance isn't enough.

My best guess? The so-called 'FauxK' upscaling techniques seen today will be back, refined and improved for next-gen - be it through developers' ingenuity or via dedicated hardware. A 6x to 8x leap in GPU performance over the base PlayStation 4 can be delivered, but servicing a 4x leap in resolution doesn't leave a huge amount of remaining overhead for radically improved visuals. The jump from OG Xbox to Xbox 360 required a bleeding-edge GPU to deliver a 3x increase in pixel density with enough headroom for much improved graphics. Meanwhile, the leap from PS3 to PS4 saw only a 2.25x increase in pixel-count. In this respect, expecting native 4K across the board from next-gen doesn't seem likely.

More probable is innovative use of custom hardware. There are already rumours of Sony collaborating directly with AMD on its Navi architecture, and PS4 Pro offered some fascinating technology in allowing a console GPU to punch well above its weight in supporting 4K screens, even though third-party developer buy-in was relatively limited. Next-gen is a crossroads for game technology and this time around, we may well see alternative visions from Sony and Microsoft etched directly into the silicon, with each taking strategic bets on the future of graphics via the integration of custom hardware. If the make-up of the current-gen machines was defined by two vendors using very similar AMD technology, next-gen - by necessity - may be a little more interesting, and dare we say it, a little more exotic? And the idea of a new, more tightly focused series of consoles could add further spice.
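For what it's worth, the pixel-count multipliers in the piece are easy to sanity-check; a quick back-of-the-envelope sketch (the 6x to 8x GPU figure is the article's estimate, not a measurement):

// Sanity check of the resolution jumps mentioned in the article.
#include <cstdio>

int main()
{
    const double og_xbox = 640.0 * 480.0;    // common OG Xbox output
    const double hd720   = 1280.0 * 720.0;   // Xbox 360 / PS3 era target
    const double hd1080  = 1920.0 * 1080.0;  // PS4 era target
    const double uhd4k   = 3840.0 * 2160.0;  // next-gen 4K target

    std::printf("OG Xbox -> 360 : %.2fx the pixels\n", hd720 / og_xbox);  // ~3.0x
    std::printf("PS3 -> PS4     : %.2fx the pixels\n", hd1080 / hd720);   // 2.25x
    std::printf("1080p -> 4K    : %.2fx the pixels\n", uhd4k / hd1080);   // 4.0x

    // If the GPU leap over base PS4 is 6x-8x and native 4K consumes 4x of it,
    // only ~1.5x-2x per-pixel headroom remains for richer visuals.
    std::printf("Per-pixel headroom at native 4K: %.1fx-%.1fx\n", 6.0 / 4.0, 8.0 / 4.0);
    return 0;
}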
 
Do you have sources by any chance? My Google fu is not yielding the results I want. Everything I've read points to higher design costs and higher wafer costs. Cost per transistor is lower, but cost per mm^2 is higher.

Most of what I've read has been in Xplore and other IEEE publications that may require membership. But this sums up the overall picture: the design cost is sunk once up front, and cost per die drops over time as wafer costs drop - unless you have a process reliant on a particular material that runs short and there are no substitutes.
 
If you ask developers what they want, it's a faster horse. Cerny revealed with PS4 that developers weren't interested in dedicated RT hardware or other eccentricities. They just wanted a balanced design with unified, fast RAM.

Similarly, once the current discussions about RT and DXR spilled onto Twitter, developers (one of them being @repi) made it clear they were envisioning next-next gen when they were talking about it.

Maybe some games will use ray tracing next generation, but only for shadowing and ambient occlusion.
 
I will truly be happy if it's a 15tf console; that's an 8x increase over the base PS4, which is quite substantial. You could even go native 4K all the time while messing with next-gen effects, or go 4K CBR and go all out with fluid dynamics sims, voxel GI, super-dense game worlds and insane character models. Hope Navi delivers.
 
or go 4K CBR and go all out with fluid dynamics sims, voxel GI, super-dense game worlds and insane character models. Hope Navi delivers.
Are you sure that 1080p isn't what's necessary to get all those things, instead of 4K CBR? Because...

I will truly be happy if it's a 15tf console; that's an 8x increase over the base PS4, which is quite substantial.
IIRC, Sony said that the PS4 was 10 times more powerful than the PS3, but I don't see PS4 games looking 10 times better.
 
I will truly be happy if it's a 15tf console; that's an 8x increase over the base PS4, which is quite substantial. You could even go native 4K all the time while messing with next-gen effects, or go 4K CBR and go all out with fluid dynamics sims, voxel GI, super-dense game worlds and insane character models. Hope Navi delivers.
I would love 15 TFlops but I feel 12 is going to be near the limit. Let's hope Navi delivers.
 
Are you sure that 1080p isn't what's necessary to get all those things, instead of 4K CBR? Because...


IIRC, Sony said that the PS4 was 10 times more powerful than the PS3, but I don't see PS4 games looking 10 times better.
I was just using that as an example; the point being that with 4K CBR you can go so much further than native 4K. 1080p is ancient history and completely unacceptable for the PS5 gen. PS4 is nowhere near 10 times PS3 in power, more like 5.5x.
I would love 15 TFlops but I feel 12 is going to be near the limit. Let's hope Navi delivers.
My money is on 14tf. We'll see.
 
Most of what I've read has been in Xplore and other IEEE publications that may require membership. But this sums up the overall picture: the design cost is sunk once up front, and cost per die drops over time as wafer costs drop - unless you have a process reliant on a particular material that runs short and there are no substitutes.
Nice, it's good you still subscribe to paid content. I really feel like the internet is such a bad source of truth unless you're willing to pay for something. It's crazy how long I've gotten used to just absorbing the internet and leveraging it as a trusted source; I really should be paying for things.
 
I was just using that as an example; the point being that with 4K CBR you can go so much further than native 4K. 1080p is ancient history and completely unacceptable for the PS5 gen.
Yes, I understand your point. I only wonder if next gen will be powerful enough to render things in a higher resolution AND still really push the boundaries to achieve a generational jump in graphics, etc. To me, that won't happen if we don't have, finally, realistic hair and clothing (no more "clay clothes" and modeled wrinkles, please! just simulate proper clothing, already), realistic vegetation, models where you can't see polygon edges, soft/area shadows, a truly dense world with better AI, etc.

PS4 is nowhere near 10 times PS3 in power, more like 5.5x.
Well, I just recalled what Sony said, but at any rate I don't see PS4 games looking 5 times better than PS3 games, either.
 
Yes, I understand your point. I only wonder if next gen will be powerful enough to render things in a higher resolution AND still really push the boundaries to achieve a generational jump in graphics, etc. To me, that won't happen if we don't have, finally, realistic hair and clothing (no more "clay clothes" and modeled wrinkles, please! just simulate proper clothing, already), realistic vegetation, models where you can't see polygon edges, soft/area shadows, a truly dense world with better AI, etc.


Well, I just recalled what Sony said, but at any rate I don't see PS4 games looking 5 times better than PS3 games, either.
Why are you comparing things on objective and subjective scales?


For those expecting 14+ TF, that’s a lot. You need 72CUs at 1520 MHz to get there, as one example.
 
PS4 is nowhere near 10 times PS3 in power, more like 5.5x.
Well, I just recalled what Sony said, but at any rate I don't see PS4 games looking 5 times better than PS3 games, either.
Very silly conversation. The performance delta between PS3 and PS4 is based on what can be used in games, not just peak flops. PS4 could easily be 10x PS3 in terms of what devs actually use frame to frame, as its GPU is vastly more efficient.

Meanwhile, what does '5x better looking' mean? 'Better looking' isn't quantifiable. Going from 720p30 to 1080p60 is quantitatively 4.5 times better off the bat (2.25x the pixels at 2x the frame-rate). If PS4 is ten times more powerful than PS3, then the games look ten times better, because that's what the scale of performance actually gets you. If it doesn't look 'ten times better' to you, then your scale of what to expect is totally uncalibrated. Like loudness or brightness: twice as much energy is perceived as only a modest increase.

Thus, how much more powerful PS4 is than PS3 isn't realistically measurable, and the results on screen of that delta aren't realistically measurable, so people shouldn't really be throwing numbers around as if they are scientific metrics. ;)
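A trivial check of that 4.5x figure, counting pixels per second only (it says nothing about per-pixel quality):

// Pixel-throughput ratio of 1080p60 versus 720p30.
#include <cstdio>

int main()
{
    const double px_720p  = 1280.0 * 720.0;
    const double px_1080p = 1920.0 * 1080.0;
    // 2.25x the pixels at 2x the frame-rate = 4.5x the pixels per second.
    std::printf("1080p60 / 720p30 throughput: %.2fx\n",
                (px_1080p * 60.0) / (px_720p * 30.0));
    return 0;
}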
 
^Yup, I think I've made a mess of expressing what I was trying to say, and the conversation ended up being a bit silly, indeed. Today I'm not very good at multitasking (I'm at work right now). :-S
 
Why are you comparing things on objective and subjective scales?


For those expecting 14+ TF, that’s a lot. You need 72CUs at 1520 MHz to get there, as one example.

IIRC, with a die size similar to the PS4/X1 on 7nm, we can reasonably expect an 8-core Zen and an RX 580.

What impact would they have on clockspeeds though? Since 7nm allows you to clock higher with the same TDP but an APU necessitates lower clockspeeds, would the clockspeeds of an APU-bound RX580 be roughly those of the current 14nm RX580?
 
For those expecting 14+ TF, that’s a lot. You need 72CUs at 1520 MHz to get there, as one example.
This.

That's not even counting the cost of disabled CUs.

So we're looking at a new memory controller for more memory and 72 CUs running at 1520 MHz, all in 360mm^2 max.
For reference, the X1X is 7 billion transistors / 360mm^2 / 44 CUs (4 disabled, 40 active) / 1172 MHz.
CPU is at 2.3 GHz.

Cooling is going to be a big question the higher the clocks go.
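Those figures follow from the usual GCN rule of thumb of CUs x 64 lanes x 2 FLOPs per clock; a quick sketch to check them (the 72 CU / 1520 MHz part is the hypothetical config above, not an announced chip):

// GCN peak-FLOPS rule of thumb: CUs * 64 lanes * 2 ops (FMA) per clock.
#include <cstdio>

double gcn_tflops(int cus, double mhz)
{
    return cus * 64.0 * 2.0 * mhz * 1.0e6 / 1.0e12;
}

int main()
{
    std::printf("Base PS4, 18 CUs @ 800 MHz : %.2f TF\n", gcn_tflops(18, 800.0));   // ~1.84
    std::printf("X1X,      40 CUs @ 1172 MHz: %.2f TF\n", gcn_tflops(40, 1172.0));  // ~6.0
    std::printf("Example,  72 CUs @ 1520 MHz: %.2f TF\n", gcn_tflops(72, 1520.0));  // ~14.0
    return 0;
}

That ~14 TF result is also where the earlier "15tf is 8x base PS4" claim comes from: 15 / 1.84 ≈ 8.2x.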
 
What impact would they have on clockspeeds though?
Yield, mainly. The higher the clocks, the fewer chips you'll have that can meet that requirement. If you have 10 chips per wafer and you're losing several per wafer because they don't meet those clockspeeds, then the cost per chip skyrockets. And the cooling requirements won't be cheap the higher you go.
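A toy illustration of that yield effect, with made-up numbers (fixed wafer cost, 10 candidate dies per wafer; only the ratios matter):

// Cost per usable chip as fewer dies per wafer hit the target clocks.
#include <cstdio>

int main()
{
    const double wafer_cost     = 10000.0;  // hypothetical wafer cost
    const int    dies_per_wafer = 10;

    for (int good = dies_per_wafer; good >= 4; good -= 2)
    {
        std::printf("%2d of %d dies bin at target clocks -> %.0f per usable chip\n",
                    good, dies_per_wafer, wafer_cost / good);
    }
    return 0;
}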
 