Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
Zen 2 chiplet with 8 cores is only 70mm^2. That leaves plenty of GPU space. Shaving tens of mm^2 to maybe fit a few more CUs at the expense of cores seems a fool’s game given it would complicate BC and likely have a tangible performance deficit compared to an 8 core CPU.
 
I wonder, if PS5 will have a dedicated WiFi chip for rock solid home Remote Play, what would it take to enable this tech to be also used for wireless sending of video feed to PSVR2?

They could kill 2 birds with one stone.

I am catching up on the thread, not sure if anybody answered this in a post I have not read yet.
But WiFi is a very unreliable medium, and retransmissions happen more often than you think on the lower layers, i.e. the WiFi layer. For those reasons I do not think WiFi is a good carrier for the link between PSVR and the console; it's very hard to guarantee anything about delivery with WiFi.
 
I am catching up on the thread, not sure if anybody answered this in a post I have not read yet.
But WiFi is a very unreliable medium, and retransmissions happen more often than you think on the lower layers, i.e. the WiFi layer. For those reasons I do not think WiFi is a good carrier for the link between PSVR and the console; it's very hard to guarantee anything about delivery with WiFi.
But is it also unreliable when it's within 10 feet and there is nothing occluding it? Can't they filter out the duplicate packets? Or tune the signal or something?
 
I'm pretty certain the Wii U used 5 GHz WiFi for its remote playback, albeit with a custom chip from Broadcom.

Apparently it was actually quite good.
 
I guess because at the moment the CPU 'TF' is so low, moving these tasks over to the GPU has a much more positive effect on the CPU than a negative effect on the GPU?
Yes, I believe this. The command processor on the GPU will pick up a lot more heavy lifting as a result. The CUs might be used a little more, but that can be accomplished with async calls.
 
Zen 2 chiplet with 8 cores is only 70mm^2. That leaves plenty of GPU space. Shaving tens of mm^2 to maybe fit a few more CUs at the expense of cores seems a fool’s game given it would complicate BC and likely have a tangible performance deficit compared to an 8 core CPU.
Alright, let's try this again from another perspective.

$399, Zen 2, 7nm, Ray Tracing, 10+ TF, 16+ GB of memory, SSD, Chiplet design

Pick two, cause everything else is going to be compromised.
 
It's mid-range in performance for a 2018-released new-architecture GPU. With the top end being at 14 TF, the 2070 does about half that. No different from the PS4's 7870 level of performance for its time.
It's a high-end card. It's a massive chunk of silicon at a high-end price with high-end performance. If the 2080 hadn't released yet and only the 2070 was out, you'd call it Nvidia's high-end flagship card. The 2070 is high end; the 2080 is ultra-elite end, alongside the likes of the Titan. Mid- and low-tier cards will get their own chips later, will be priced a lot lower than the 2070 and perform a lot worse, so that's yet another reason not to count the 2070 as mid-range, unless your concept of 'middle' spans everything from $200 up to $600 and a 2-3x performance difference...
 
I wonder if chiplets might allow twice the die area for twice the price, instead of the current trend of cost runaway for larger chips above 300 or 400 mm^2. Yield would be per chiplet. So if a single 350 mm^2 die is $100 on the BOM, perhaps chiplets would allow 700 mm^2 of total die for $200.

I predict a $399 price point, but what I really want is a $499 next gen with 700 mm^2 worth of silicon.
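The yield argument above can be sketched with a toy back-of-the-envelope model. All numbers here (defect density, wafer cost, usable wafer area) are assumptions for illustration, not real foundry data; the point is just that with a Poisson defect model, one big die loses disproportionately more yield than two half-size chiplets:

```python
import math

# Toy die-cost model. All constants are assumed illustrative values,
# not real foundry numbers.
DEFECT_DENSITY = 0.002   # defects per mm^2 (assumption)
WAFER_COST = 6000.0      # dollars per wafer (assumption)
WAFER_AREA = 70000.0     # usable mm^2 per wafer, ignoring edge loss

def cost_per_good_die(area_mm2):
    """Cost of one working die, using a Poisson yield model:
    yield = exp(-defect_density * area)."""
    dies_per_wafer = WAFER_AREA / area_mm2
    yield_rate = math.exp(-DEFECT_DENSITY * area_mm2)
    return WAFER_COST / (dies_per_wafer * yield_rate)

mono_700 = cost_per_good_die(700)            # one monolithic 700 mm^2 die
chiplets_2x350 = 2 * cost_per_good_die(350)  # two 350 mm^2 chiplets

print(f"monolithic 700 mm^2:   ${mono_700:.0f}")
print(f"2 x 350 mm^2 chiplets: ${chiplets_2x350:.0f}")
```

Under these assumed numbers the monolithic die costs roughly twice what the two chiplets do combined, because the exponential yield penalty compounds with area, which is the intuition behind "twice the die area for twice the price" only working if yield is taken per chiplet.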
 
There's no way a 445 mm^2 processor on the most advanced process currently available for large, high performance GPUs is "mid range". The only more advanced process is 7nm, and there's yet to be even a super high end enterprise GPU launch on that.

By comparison, the current console behemoth is the X1X at ~360 mm^2 on a slightly older process.
 
I wonder if chiplets might allow twice the die area for twice the price, instead of the current trend of cost runaway for larger chips above 300 or 400 mm^2. Yield would be per chiplet. So if a single 350 mm^2 die is $100 on the BOM, perhaps chiplets would allow 700 mm^2 of total die for $200.

I predict a $399 price point, but what I really want is a $499 next gen with 700 mm^2 worth of silicon.

A separate CPU chiplet might also allow for a more cutting edge CPU, as months or years of integration time into a custom SoC could be reduced. If you can pick from the same stock of CPU chiplets as the PC server market your options increase.
 
If 2080 didn't release yet and only 2070 was out, you'd call it nVidia's high-end flagship card.

But the 2080 is out, and the 2070 is at GTX 1080 level, high end from early 2016. In performance it's now a mid-ranger compared to other 2018 products. If the PS5 sported a 2070 it wouldn't really be a monster by late 2020 standards, as a 2070 is a whole bunch slower than the real monster, the 2080 Ti.
In fact it would be in about the same boat as the PS4 was with its 7870-level GPU.

2080 is ultra-elite end

I have never seen anyone state it like that; it's not even close. The 2080 Ti is the high end, and it's a monster of a GPU, with even most reviewers calling it a monster; it's much faster than a 2080.
Ultra-elite-whatever end would be the Titan V with its 12 GB of HBM2 memory.
 
I wonder if chiplets might allow twice the die area for twice the price, instead of the current trend of cost runaway for larger chips above 300 or 400 mm^2. Yield would be per chiplet. So if a single 350 mm^2 die is $100 on the BOM, perhaps chiplets would allow 700 mm^2 of total die for $200.

I predict a $399 price point, but what I really want is a $499 next gen with 700 mm^2 worth of silicon.

Sounds like it would be a monster of a console...
 
A separate CPU chiplet might also allow for a more cutting edge CPU, as months or years of integration time into a custom SoC could be reduced. If you can pick from the same stock of CPU chiplets as the PC server market your options increase.

Yeah, at the very least using a CPU chiplet lets you take advantage of the part coming off the same line as all the other CPU products. You could get a good price scooping up parts that didn't validate for server use but hit console spec fine, and you can immediately take advantage of process improvements without needing to port the whole SoC.
 
But is it also unreliable when it is within 10 feet and there is nothing occluding it? Can they not sanitize the duplicate packages? Or tune the signal or something?

Yes, because we are talking about energy in such small quantities that it's actually bloody magic it works at all. And WiFi is resilient, so any idiot can set it up and make it work, but it actually takes craft to make it work well all the time.

The problem is interference. If a WiFi frame is destroyed, or rather the receiver is unable to decode it for some reason, it will be retransmitted (this is below the Ethernet frame/IP packet level, so UDP/TCP has no bearing on it), and that adds delay. Delay is bad for real-time stuff like VR. Now, you will get less transmission delay (maybe fewer retransmits too, but then you have packet drops, i.e. missing data, and at best you get pixelation) if you tag the traffic as real-time. But then you also lose bandwidth, since you are not aggregating traffic into "super frames". So: you want bandwidth, you get higher delay; you want resilience, you get higher delay; you want less delay, you lose the other two.
I am talking about WiFi now, not some proprietary radio-based solution. But even then, if it's in the ISM band, and most likely it must be, you get a ton of other things competing for the same frequency/bandwidth/airtime. This is why 2.4 GHz baby monitors and WiFi hi-fi devices like Sonos seriously mess things up for WiFi in 2.4 GHz. The new LTE stuff that will use the 5 GHz band will mess up 5 GHz for WiFi.
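The retransmission delay described above can be illustrated with a toy simulation. The frame error rate, airtime, and retry limit below are assumed round numbers for illustration (real 802.11 behaviour depends on rate adaptation, aggregation, and channel conditions), but they show how link-layer retries stretch the tail latency even when the average stays low:

```python
import random

random.seed(42)  # deterministic run for reproducibility

# Toy model of WiFi link-layer retransmission. All constants are
# assumed illustrative values, not measured 802.11 figures.
FRAME_ERROR_RATE = 0.1   # 10% of frame attempts fail to decode (assumption)
AIRTIME_MS = 0.5         # time for one transmission attempt (assumption)
MAX_RETRIES = 7          # retry limit before the frame is dropped

def frame_latency_ms():
    """Latency of delivering one frame, including link-layer retries."""
    latency = 0.0
    for _attempt in range(MAX_RETRIES + 1):
        latency += AIRTIME_MS
        if random.random() > FRAME_ERROR_RATE:
            return latency   # frame decoded successfully
    return latency           # dropped after exhausting retries

samples = [frame_latency_ms() for _ in range(100_000)]
avg = sum(samples) / len(samples)
worst = max(samples)
print(f"average: {avg:.2f} ms, worst observed: {worst:.1f} ms")
```

The average latency lands near AIRTIME_MS / (1 - FRAME_ERROR_RATE), but the worst-case frames take several times longer; for VR, it is exactly that unpredictable tail, not the average, that causes problems.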
 
I only hope that we really get a package without any bottlenecks this time.
Like no tablet CPU, and that hUMA also wasn't that impressive in the end once you found out the bandwidth isn't that impressive with only about 5 GB of RAM for games.

I really hope they think this through this time. I want a machine capable of showing that consoles have an edge over similar PC components.
John Carmack stated at the beginning of this gen that the PS4 would be roughly twice as powerful as a similar PC.

For some reason he wasn't right this time. But the logic is correct:
No heavy desktop OS
No multilayered API like DX
To-the-metal coding instead
Many more draw calls can be handled

All together this MUST result in an edge for console hardware, which should be visible in those Digital Foundry comparisons of third-party games in the future.

If the PS5 is again thwarted in such a way that it loses to lesser PC hardware like the PS4 did the whole gen, then Cerny will need a safe room! :D
 
I only hope that we really get a package without any bottlenecks this time.
Like no tablet CPU, and that hUMA also wasn't that impressive in the end once you found out the bandwidth isn't that impressive with only about 5 GB of RAM for games.

I really hope they think this through this time. I want a machine capable of showing that consoles have an edge over similar PC components.
John Carmack stated at the beginning of this gen that the PS4 would be roughly twice as powerful as a similar PC.

For some reason he wasn't right this time. But the logic is correct:
No heavy desktop OS
No multilayered API like DX
To-the-metal coding instead
Many more draw calls can be handled

All together this MUST result in an edge for console hardware, which should be visible in those Digital Foundry comparisons of third-party games in the future.

If the PS5 is again thwarted in such a way that it loses to lesser PC hardware like the PS4 did the whole gen, then Cerny will need a safe room! :D


Less good PC? I never saw a PC with a Jaguar core and a 7850 do better than the PS4 or the Xbox One.
 
This is also true: there is no comparable CPU on PC.
The best we can do is a comparison with an Athlon 5230, but that has only 4 cores instead of the 8 of the PS4's Jaguar.

But anyway, there are way too many bottlenecks this gen...
 
Less good PC? I never saw a PC with a Jaguar core and a 7850 do better than the PS4 or the Xbox One.
Agreed. This gen did really well. If it really ends with TLOU2 in tip-top shape (is there a clip of the non-Pro version yet?), it really pushed the boundaries of the console.
 