Next Generation Hardware Speculation with a Technical Spin [2018]

For consoles it can be a good idea not to have demands and/or hopes that are too high. By the time they hit the toy stores, they might disappoint.
I'm sure next-gen consoles will be more than capable of achieving such effects but I'm also convinced developers will use all that power for something dumb like native 4K.
 
Ray tracing requires an abundance of general compute/shading power, so I don't think the absence of ray-tracing hardware is required to have sufficient general compute power for more physics simulation. If there's enough ALU for ray tracing, then there's enough for devs to do more physics simulation instead of ray tracing.

I'm not sure what to think of the hardware requirements for ray-tracing as Nvidia has implemented it in Turing. You could compare the number of transistors to Pascal, but I don't know what % of the transistors is for the tensor cores vs the ray-tracing hardware.
 
but I don't know what % of the transistors is for the tensor cores vs the ray-tracing hardware.
The increased die size is because of the bloated features of the architecture: they separated the FP32 and INT32 units, supported FP16 RPM, and added ray-tracing cores and tensor cores, and the tensor cores are probably even larger than the ones in Volta as they support INT8 and INT4, which Volta doesn't.
 
The increased die size is because of the bloated features of the architecture: they separated the FP32 and INT32 units, supported FP16 RPM, and added ray-tracing cores and tensor cores, and the tensor cores are probably even larger than the ones in Volta as they support INT8 and INT4, which Volta doesn't.

What I'm getting at is that people are worried about the percentage of die space ray-tracing hardware will take up, but it's hard to separate how much space Turing gives to the ray-tracing BVH hardware from all of the other additions like tensor cores.
 
https://jp.linkedin.com/in/tomohiro-tokoro-942a3457/ja

Found on ResetEra: the LinkedIn account of Tomohiro Tokoro (Luminous 3D Character Model Lead Artist) directly talked about the PS5. He has since deleted the PS5 mentions... :)
 
I wonder if we can read anything into him just saying PS5 instead of next gen project? Like maybe they have PS5 Dev kits.
 
Which is definitely true. Maybe not APU low-end, but if it doesn't fit $200 and up graphics cards, then I agree it's not going anywhere.
Again: in games.
I tend to agree, and I would include APUs.
To give some context to this: as the RTX 2080 Ti was introduced, it had just been the ten-year anniversary of the introduction of the ATI 4870 graphics card.
The 4870 was produced on 256 mm² worth of TSMC 55nm, had 1.2 TFLOPS at its stock 750 MHz clock, 115 GB/s of memory bandwidth, and a hair under one billion transistors.
Ten years later, the RTX 2080 Ti is produced on 754 mm² worth of TSMC 12nm, has 13.5 TFLOPS at a boost clock of 1545 MHz, 616 GB/s of memory bandwidth, and 18.6 billion transistors.

So in ten years, we have managed to increase FLOPS less than four times per mm² (and bandwidth hasn't even come close to keeping up with ALU capacity). Ah, one might say - but today's GPUs do more than the GPUs of a decade ago, so comparing FLOPS alone is unfair! And that argument would have some merit (as would comparing bandwidths, though that gives a less optimistic prognosis). Ultimately, what determines your computational resources is the number of transistors, and that increased 6-7 times per mm² in the last decade.

So, looking forward a decade to 2028/2029, where would we be? Well, insofar as a desktop GPU can be called mainstream at all in 2028/2029, let's give it 200 mm² (because cost per mm² is rising), which is roughly a fourth of the RTX 2080 Ti. And if lithography advances at the same pace as in the last decade (it won't), such a mainstream chip would offer roughly the same FLOPS performance as today's RTX 2080 Ti, or up to 50% more transistors, depending on your favourite unit of merit.
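To make the arithmetic above easy to check, here is a minimal sketch (purely illustrative) that recomputes the per-mm² gains from the figures quoted in this post and applies the same optimistic "the last decade repeats" assumption to a hypothetical 200 mm² die:

```cpp
#include <cstdio>

int main() {
    // Figures quoted in the post: ATI 4870 (RV770, 2008) vs RTX 2080 Ti (TU102, 2018).
    const double hd4870_mm2 = 256.0, hd4870_tflops = 1.2,  hd4870_btrans = 0.956;
    const double tu102_mm2  = 754.0, tu102_tflops  = 13.5, tu102_btrans  = 18.6;

    // Per-mm^2 gains over the decade.
    const double flops_gain = (tu102_tflops / tu102_mm2) / (hd4870_tflops / hd4870_mm2);
    const double trans_gain = (tu102_btrans / tu102_mm2) / (hd4870_btrans / hd4870_mm2);
    printf("FLOPS/mm^2 gain 2008->2018:       %.1fx\n", flops_gain); // ~3.8x
    printf("transistors/mm^2 gain 2008->2018: %.1fx\n", trans_gain); // ~6.6x

    // Hypothetical 200 mm^2 "mainstream" die in 2028/2029 (~1/4 of TU102),
    // optimistically assuming the same FLOPS/mm^2 gain repeats over the next decade.
    const double future_tflops = 200.0 * (tu102_tflops / tu102_mm2) * flops_gain;
    printf("2028/29 200 mm^2 die: ~%.0f TFLOPS (roughly a 2080 Ti)\n", future_tflops);
}
```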

Only, of course, the likelihood of lithographic advances over the next decade being as strong as during the last is very, very low. It just won't happen. So the above is an extremely optimistic scenario. See you in a decade for cashing in the bets. ;-)

I can't help saying it again: ultimately, the yardstick any newly proposed method of rendering is going to be measured by is efficiency. If the efficiency isn't there, lithography won't bail you out. Those days are over.
 
And it does seem that we're getting to the point where ray tracing is one such efficiency win.

Going by AMD's comments on RTRT, I'm in a similar boat to iroboto - the likelihood of it appearing in the next generation is minimal. It also vastly diminishes my hopes of a two-tier launch, because assuming there's no hardware RT at launch for the PS5/XB2, neither Sony nor Microsoft is going to want to pass up the chance at a ray-tracing mid-gen iteration. Also, it's just such an easy path to a mid-gen: "Ray tracing plus a higher resolution!"

It might be a little deflating for the time being, but we're potentially in for some very interesting times with mid-gen PS5/XB2 and the Switch 2. Just imagine that: 3 or 4 years of RTRT development across the industry, and Nvidia manufacturing their 3rd or 4th generation of RTX hardware, scaling it down for portable awesome.
 
First signs of PS5 "existing" through developer CVs.

Square Enix is already working on a title for the PS5:
https://wccftech.com/playstation-5-aaa-square-enix/


Maybe Final Fantasy VII remake is a PS5 launch title. :runaway:
(EDIT: No, it shouldn't be FFVII, because that one is being made on Unreal Engine and the dev house mentioned is Luminous Productions, who use the Luminous Engine... though with Square Enix acknowledging they'll split the game into several episodes, the series is bound to creep onto the PS5 at some point.)


So in ten years, we have managed to increase FLOPS less than four times per mm² (and bandwidth hasn't even come close to keeping up with ALU capacity).
While I do agree, we should take into account that 1 TFLOPS on Volta's shader processors is worth a lot more effective throughput than 1 TFLOPS on Terascale 1's VLIW5 units. Probably over 3x more.
Same goes for effective bandwidth.
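As a rough illustration of the peak-versus-effective distinction being made here, a minimal sketch; the utilization figures are invented placeholders chosen to reproduce the post's ~3x estimate, not measured numbers:

```cpp
#include <cstdio>

int main() {
    // Effective throughput = peak FLOPS x achieved utilization.
    // The utilization values below are purely illustrative assumptions:
    // VLIW5 often left lanes idle when the compiler couldn't pack enough
    // independent ops, while scalar SIMT designs like Volta keep ALUs busier.
    const double peak_tflops = 1.0;
    const double util_terascale_vliw5 = 0.3; // hypothetical
    const double util_volta           = 0.9; // hypothetical

    printf("Terascale 1: ~%.2f effective TFLOPS\n", peak_tflops * util_terascale_vliw5);
    printf("Volta:       ~%.2f effective TFLOPS\n", peak_tflops * util_volta);
    printf("ratio:       ~%.1fx\n", util_volta / util_terascale_vliw5); // ~3x, the post's estimate
}
```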
 
and Nvidia manufacturing their 3rd or 4th generation of RTX hardware, scaling it down for portable awesome.
That's very unrealistic. This first gen is massive. It'll scale down notably on 7nm to something more moderate, but still well beyond mobile. We'll then hit major lithographic issues, meaning the shrink to a cheap, low-end desktop part is going to be problematic. Getting it into mobile is unlikely without a refactoring of how it's implemented.

Curiously, on the other side, as overlooked as ever, is PowerVR's RT, which was present in a mobile device in the first place. Really wish we had more on that to compare to!
 
That's very unrealistic. This first gen is massive. It'll scale down notably on 7nm to something more moderate, but still well beyond mobile. We'll then hit major lithographic issues, meaning the shrink to a cheap, low-end desktop part is going to be problematic. Getting it into mobile is unlikely without a refactoring of how it's implemented.

Curiously, on the other side, as overlooked as ever, is PowerVR's RT, which was present in a mobile device in the first place. Really wish we had more on that to compare to!
lol, the dichotomy.
I agree about the lack of node shrinks. Architecture is going to be a big driver very soon.
 
That's very unrealistic. This first gen is massive. It'll scale down notably on 7nm to something more moderate, but still well beyond mobile. We'll then hit major lithographic issues, meaning the shrink to a cheap, low-end desktop part is going to be problematic. Getting it into mobile is unlikely without a refactoring of how it's implemented.

Curiously, on the other side, as overlooked as ever, is PowerVR's RT, which was present in a mobile device in the first place. Really wish we had more on that to compare to!

It might be unrealistic, but it depends on how it develops. The PowerVR GPU you've cited is the reason I think a portable version could be possible a few years down the line.

Just a shrink of the RTX 2070, though, would be preposterous.
 
So in ten years, we have managed to increase FLOPS less than four times per mm² (and bandwidth hasn't even come close to keeping up with ALU capacity). Ah, one might say - but today's GPUs do more than the GPUs of a decade ago, so comparing FLOPS alone is unfair! And that argument would have some merit (as would comparing bandwidths, though that gives a less optimistic prognosis). Ultimately, what determines your computational resources is the number of transistors, and that increased 6-7 times per mm² in the last decade.

So, looking forward a decade to 2028/2029, where would we be? Well, insofar as a desktop GPU can be called mainstream at all in 2028/2029, let's give it 200 mm² (because cost per mm² is rising), which is roughly a fourth of the RTX 2080 Ti. And if lithography advances at the same pace as in the last decade (it won't), such a mainstream chip would offer roughly the same FLOPS performance as today's RTX 2080 Ti, or up to 50% more transistors, depending on your favourite unit of merit.

Only, of course, the likelihood of lithographic advances over the next decade being as strong as during the last is very, very low. It just won't happen. So the above is an extremely optimistic scenario. See you in a decade for cashing in the bets. ;-)

Doesn't sound very optimistic to me :p and maybe not that accurate either, imo. A 5nm chip should be able to put at least the FLOPS into a 200 mm² die much sooner than a decade from now; 50% more transistors is a completely different ask, but should bring a clear uptick in performance as well, although power problems need solving. A 7nm Navi probably won't be too far off from those FLOPS either, and it's not projected to be a high-end chip. The actual performance might be a different story though...

Apple just put around 10 billion transistors in its latest A12X SoC in a very compact chip, with seemingly quite impressive performance-per-watt figures. An SoC is of course not the same as a GPU and Apple seems to be ahead of the curve, but others can get closer and make good progress on efficiency as it becomes a must right about now. At any rate, I'd be surprised if a roughly 200 mm² gaming-focused chip in 2028/2029 wouldn't make short work of a 10-year-old 2080 Ti. I hope so anyway.
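As a sanity check on that intuition, a small sketch; the ~122 mm² die size used for the A12X is an approximate figure from public die analyses and should be treated as an assumption:

```cpp
#include <cstdio>

int main() {
    // The A12X packs ~10 billion transistors; its die size is roughly 122 mm^2
    // according to public die-shot estimates (treat that figure as approximate).
    const double a12x_btrans = 10.0, a12x_mm2 = 122.0;
    const double density = a12x_btrans / a12x_mm2; // ~0.08 billion transistors per mm^2

    // What a 200 mm^2 die at that 7nm-class density would hold, vs the 2080 Ti's 18.6B.
    const double budget_200mm2 = density * 200.0;
    printf("7nm-class 200 mm^2 budget: ~%.1fB transistors (2080 Ti: 18.6B)\n", budget_200mm2);
}
```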
 
It might be unrealistic, but it depends on how it develops. The PowerVR GPU you've cited is the reason I think a portable version could be possible a few years down the line.

Just a shrink of the RTX 2070, though, would be preposterous.
Ignoring consoles for a second, people should be aware of the possibility that MS's next wave of exclusives will contain both ray tracing and AI up-res. The latter is more profitable than the former, but together they make a lot of sense.

The next Halo, Forza, and possibly Gears should all have DXR enabled, as MS now supports the PC base and this is something MS needs to do. There is no value in MS controlling the whole DirectML and DXR APIs if they only intend to piss them away.

We should expect to see DXR hit the console given MS' goals. The question is when.
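For reference, if hardware RT does reach a future Xbox, the game-side surface would presumably be the same DXR feature check PC titles use today. A minimal sketch of that query against a D3D12 device (standard D3D12 calls; error handling trimmed):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

// Returns true if the given D3D12 device exposes hardware-accelerated DXR.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    // Tier 1.0 is the baseline DXR tier.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

int main()
{
    ComPtr<ID3D12Device> device;
    // Default adapter, feature level 12_0.
    if (SUCCEEDED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                    IID_PPV_ARGS(&device))))
        printf("DXR supported: %s\n", SupportsDXR(device.Get()) ? "yes" : "no");
}
```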
 
Ignoring consoles for a second, people should be aware of the possibility that MS's next wave of exclusives will contain both ray tracing and AI up-res. The latter is more profitable than the former, but together they make a lot of sense.

The next Halo, Forza, and possibly Gears should all have DXR enabled, as MS now supports the PC base and this is something MS needs to do. There is no value in MS controlling the whole DirectML and DXR APIs if they only intend to piss them away.

We should expect to see DXR hit the console given MS' goals. The question is when.

Not sure the presence of those APIs is as strong of an endorsement for their use in gaming as you think. DXR has utility in the Workstation market and DirectML has utility in the HPC market. Console & PC gaming aren't necessarily going to make or break the success of those APIs.
 
It appears as though Sony is going to announce there will be no E3 keynote this year, which seems to confirm a 2020 launch.

This is coming from a Reddit post saying so and several ResetEra administrators alluding to some big news today.
 
Not sure the presence of those APIs is as strong of an endorsement for their use in gaming as you think. DXR has utility in the Workstation market and DirectML has utility in the HPC market. Console & PC gaming aren't necessarily going to make or break the success of those APIs.
HPC market for DirectML? It's marketed fully towards gaming. Too low level for it to be useful in the standard workstation market.

DXR, true, but then again, why not include it anyway? They need incentives and strong mindshare; I can't see them doing this with more of the same.
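To illustrate how low-level DirectML sits relative to something like WinML, a minimal sketch of its entry point: you create an IDMLDevice directly on top of an existing D3D12 device and drive it from ordinary compute command lists (operator construction and dispatch omitted):

```cpp
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// DirectML lives directly on top of D3D12: hand it a D3D12 device and it gives
// you an IDMLDevice, from which operators are built, compiled, and recorded
// onto a regular D3D12 compute command list.
ComPtr<IDMLDevice> CreateDmlDevice(ID3D12Device* d3d12Device)
{
    ComPtr<IDMLDevice> dmlDevice;
    // Returns an empty ComPtr if DirectML isn't available on this system.
    DMLCreateDevice(d3d12Device, DML_CREATE_DEVICE_FLAG_NONE,
                    IID_PPV_ARGS(&dmlDevice));
    return dmlDevice;
}
```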
 
It appears as though Sony is going to announce there will be no E3 keynote this year, which seems to confirm a 2020 launch.

This is coming from a Reddit post saying so and several ResetEra administrators alluding to some big news today.
Would you be a pal and keep us updated? ResetEra moves way too fast on these types of days to keep up.
 
HPC market for DirectML? It's marketed fully towards gaming. Too low level for it to be useful in the standard workstation market.

DXR, true, but then again, why not include it anyway? They need incentives and strong mindshare; I can't see them doing this with more of the same.

You are correct here. I was conflating DirectML with WinML.

Also, I was playing a bit of devil's advocate. I am actually leaning towards there being dedicated hardware for each of these in the next MS console, though I don't consider it certain.
 