Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

The XBSX will never throttle because of heat, and neither will the SSD in it. The numbers provided are guaranteed performance under all loads and temperatures. There are reports of NVMe drives throttling significantly to protect the chip when going north of 60°C, and from before drives throttled, reports of NVMe not lasting longer than 20 months under that kind of excessive heat.

The numbers provided by MS were boring, but they were guaranteed performance figures telling developers what to expect and how to get the most out of the console.

The 2.4 GB/s guarantee under all conditions is fast for a fixed number, and supporting that on the external drive as well is worth mentioning.
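For anyone curious how close their own drive runs to that 60°C region, here's a minimal sketch, assuming a Linux box with nvme-cli installed and run as root; the smart-log text format varies between tool versions, so the parsing here is an assumption:

```python
import re
import subprocess

def nvme_temperature(dev: str = "/dev/nvme0") -> int:
    """Return the composite temperature (°C) from nvme-cli's SMART log.

    Assumes `nvme smart-log` prints a line like 'temperature : 38 C';
    the exact format differs between nvme-cli versions.
    """
    out = subprocess.run(["nvme", "smart-log", dev],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"temperature\s*:\s*(\d+)", out, re.IGNORECASE)
    if match is None:
        raise RuntimeError("could not parse temperature from smart-log")
    return int(match.group(1))

if __name__ == "__main__":
    temp = nvme_temperature()
    # Consumer NVMe controllers commonly start throttling around 60-70°C.
    print(f"composite temperature: {temp}°C")
```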
 
AMD's used SRAM as a marketing point before, when it launched Vega. The presentation included the claim that Vega has 45MB of SRAM on-die, and we never figured out just where all of it came from. The L1 and L2 caches, vector register files, estimated ROP cache and vertex parameter cache capacities were still far from explaining the amount. There's likely a sea of pipeline buffers, internal registers, and internal processors adding to that total.
Microsoft could be doing what AMD did and counting every possible SRAM macro on the chip.
Vega didn't have 8 CPU cores and their attendant L3s, which would be very large single contributors to the total for the console.


Cerny mentioned it can be more difficult to fill the larger number of CUs with work when there is a lot of small geometry.
There was a presentation from AMD at GDC 2018 where the topic of CU count versus SE count came up, and larger GPUs with more CUs per SE do better with longer-running waves. There's a bottleneck in how quickly a shader engine can launch waves in GCN: one wave every 4 cycles. In a GCN GPU with 16 CUs per SE, that's 64 cycles to get one wave per CU, and GCN needs at least 4 per CU to populate every SIMD, so 256 cycles before the last SIMD in the SE has a wave launched. Then there's usually a preference for at least 2-4 wavefronts per SIMD for latency hiding, so fill time can take a lot of cycles.
If the workload has a lot of simple or short-lived shaders, SIMDs could have sub-optimal occupancy because those waves may terminate long before the SE gets back to them.
Smaller GPUs would be able to reach peak occupancy faster, and while they lack the sustained grunt of the larger GPUs, they would have fewer resources idling when dealing with short shaders.
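A quick back-of-the-envelope version of that fill-time arithmetic (the 4-cycle launch rate and 4 SIMDs per CU are the GCN figures above; the waves-per-SIMD target is the usual 2-4 preference):

```python
# Rough GCN shader-engine fill-time estimate, per the figures above.
LAUNCH_INTERVAL = 4   # GCN launches one wave per SE every 4 cycles
SIMDS_PER_CU = 4      # each GCN CU has 4 SIMDs

def fill_cycles(cus_per_se: int, waves_per_simd: int) -> int:
    """Cycles until every SIMD in one SE holds `waves_per_simd` waves."""
    waves_needed = cus_per_se * SIMDS_PER_CU * waves_per_simd
    return waves_needed * LAUNCH_INTERVAL

print(fill_cycles(16, 1))   # 256 cycles: one wave on every SIMD of a 16-CU SE
print(fill_cycles(16, 4))   # 1024 cycles to reach 4 waves/SIMD for latency hiding
print(fill_cycles(9, 4))    # 576 cycles for a smaller 9-CU shader engine
```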

There was a shift in hardware distribution from shader engine to shader array in Navi, but not much detail was given on the reason or on what did and didn't transfer. The pressure would be more acute if for some reason the SE:CU ratio was still what mattered rather than SE:SA.
Since the rasterizers and primitive units migrated to the SA, it would seem like the wave-launch function would move with them, but there are other things AMD didn't distribute to the SAs.


Sony would be testing to ensure the drive wouldn't be obviously non-performant, and would check the physical dimensions of the drive and any attached heatsink/fan.
Given how many M.2 SSDs use the same configuration under a random assortment of metal and fans, I would expect some rejects would work fine if their heatsink were removed and the drive relied on whatever measures the PS5 has for cooling bare drives (if it has any). However, I suppose it would be irresponsible for Cerny to suggest that gamers just tear their drives apart before installing them.


The Xbox One had thermal failsafe modes where it didn't immediately shut down. If there's an anomalous environmental condition, the console isn't obligated to follow its normal operation guarantees.
https://www.eurogamer.net/articles/2013-08-14-xbox-one-built-to-automatically-tackle-overheating


AMD's TDP is a somewhat opaque measure of heatsink thermal transfer capability, based on a number of inputs that AMD tweaks at its convenience.
If it were truly a measure of heat, then it would also be a measure of power consumption, because physically they are almost completely the same outside of trace conversions to other forms of energy.
Out of curiosity: these trace conversions to other forms of energy are conversions to what kinds of energy?
 
4.0 vs 3.0 doesn't apply to NVMe, only to PCIe. At the speeds Microsoft uses, there is no need for them to support 4.0 unless they want to make faster 'memory cards' in the future.

We don't even know what speed MS's implementation reaches in practice or could achieve. They might even limit the speed intentionally through the OS.
 
On the other hand, it seems very hard to compare what Sony has been doing to improve SSD performance for games to anything in the PC space (I have been trying to do some research).

DirectStorage?

DirectStorage is an all new I/O system designed specifically for gaming to unleash the full performance of the SSD and hardware decompression. It is one of the components that comprise the Xbox Velocity Architecture. Modern games perform asset streaming in the background to continuously load the next parts of the world while you play, and DirectStorage can reduce the CPU overhead for these I/O operations from multiple cores to taking just a small fraction of a single core; thereby freeing considerable CPU power for the game to spend on areas like better physics or more NPCs in a scene. This newest member of the DirectX family is being introduced with Xbox Series X and we plan to bring it to Windows as well.
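The DirectStorage API itself wasn't public at the time, so here is a toy sketch of the underlying idea only: submit a whole batch of positional reads at once and let them overlap, instead of paying per-read blocking overhead on the game thread. None of these names are DirectStorage's real interface.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def batched_reads(path: str, requests, workers: int = 4) -> list[bytes]:
    """Overlap a batch of (offset, length) reads from one asset file.

    os.pread is positional, so many workers can share one file
    descriptor without seeking. Illustration only: this is not the
    DirectStorage API, just the general shape of batched I/O.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(os.pread, fd, length, offset)
                       for offset, length in requests]
            return [f.result() for f in futures]
    finally:
        os.close(fd)

# e.g. three 64 KiB texture tiles in flight at once (hypothetical file):
# tiles = batched_reads("assets.pak", [(0, 65536), (65536, 65536), (131072, 65536)])
```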
 
Console vs. console? You always have the option to go wider. Simply allow the design to draw more power and provide a better cooling system.
They can always make a $999 console with 24 TF if they want the power crown. They can always throw money at a problem, but the goal of a console design is to get the best performance from a target BOM. That creates hard limits, especially in thermal density.

Even the 205W limit of Xeon sockets means they have to throttle AVX significantly beyond a certain number of cores. And that's just the CPU, without any GPU attached. Is AMD that amazing with AVX? No idea. Do the consoles have a 2-cycle AVX256? No idea either.

XBSX has a peak draw estimated around 235W from a 315W PSU design, and you need to subtract:
PSU and VRM efficiency (-8% PSU, -3% DC-DC VRMs)
All USB ports (7.5W x 3, unless they're not 1.5A charging current)
Onboard fast NVMe (8W?)
Expansion fast NVMe (8W?)
ODD (5W?)
Memory (2W per chip, 20W)

So the main SoC would be limited to about 150W (worked through in the sketch below). Put 100W worst-case peak in the GPU and 50W peak in the CPU, and it's difficult to imagine an AVX256 op every cycle at 3.6GHz (that would require hyperthreading to fill the ALUs that much).
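Running those estimates through explicitly (every input is a guess from the list above, so the output is a ballpark, not a measurement):

```python
# Back-of-the-envelope XBSX power budget from the estimates above.
psu_peak = 235.0                      # W, estimated peak draw

after_psu  = psu_peak * (1 - 0.08)    # ~8% PSU conversion loss
after_vrms = after_psu * (1 - 0.03)   # ~3% DC-DC VRM loss

peripherals = (7.5 * 3   # three USB ports at 1.5A charging current
               + 8.0     # onboard NVMe (guess)
               + 8.0     # expansion NVMe (guess)
               + 5.0     # optical drive (guess)
               + 20.0)   # ten GDDR6 chips at ~2W each

soc_budget = after_vrms - peripherals
print(f"SoC budget: ~{soc_budget:.0f} W")   # ~146 W, i.e. roughly 150W
```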

For PS5, it will be interesting to see what their SoC wattage limit is; we can't predict anything confidently without it.
 
I have a really hard time believing that a $499 console that maybe price-reduces to $349 would ever sell near PS4 numbers, not even half of that. The price point will be super important, or it has to be a family of consoles, i.e. keep the old console around at a cheaper price point and move to a mobile-phone-like business model.
 
$479 is the same as $399 last gen if you take inflation into account, so I wouldn’t be too sure ...
 
$479 is the same as $399 last gen if you take inflation into account, so I wouldn’t be too sure ...

It doesn't matter, based on what we have seen in past/current gens. People don't want to pay much for a gaming device. It's much more mental than a question of being able to afford something. If the PS5/Xbox is too expensive, most consumers will just go 'well, I'll buy it when it hits $200', that never happens, and the PS4 keeps outselling the PS5.
 
Console prices are similar, if not the same, as the reason game prices cannot be raised to match inflation/increasing development costs. A lot of folks are just waiting for deals, and it's mainly hardcore fans that buy at launch. And this has led to all kinds of schemes like microtransactions, as that is a way to trick people into spending more.

Maybe something like a monthly fee to use the console could work to offset the hardware price. Wait, what, that's already there :D
 
I don't understand why people are willing to pay $1000 for a phone but for consoles they want to wait years for a magic cheap Black Friday bundle at $199. I would get much more fun out of a cheaper phone and a more expensive console. But I suppose one device is a cool status symbol, showable outside the home and seen as a necessity, and the other is a kind of geeky thing hidden behind the TV.

And I guess the whole pay-in-monthly-installments approach to selling phones is a way to keep people from realising how much money they are really committing to a phone, so some people fall into that trap.

If Sony did a $999 console I would be first in line to preorder one, but I suspect the line would be very short.
 
I have a really hard time believing that a $499 console that maybe price-reduces to $349 would ever sell near PS4 numbers, not even half of that. The price point will be super important, or it has to be a family of consoles, i.e. keep the old console around at a cheaper price point and move to a mobile-phone-like business model.

Which console are you referring to? I see one $499 and one $399 console based on the reveals so far.

I have no idea why anyone is willing to pay $1000 for a phone.

My friends who have a $1000 phone have no idea why I paid $1600 for a bench multimeter.

What DMM costs that? You're into decent o-scope territory there.
 
In terms of 4K random speeds, I do not see why it would be faster than Optane.

I've been tempted to buy one of these Optane SSDs, a 1TB for about 300 dollars; used they can be had for around 180 dollars (if you get lucky, I've seen one on our marketplace). How does this work: is only 64GB of that 1TB actually the insanely fast portion, used as a cache for the bigger share of the drive? Any disadvantages versus a standard NVMe that is fast across the whole 1TB?

I was thinking, if this Optane is so amazing, why is it not in one of the consoles, or both? But prices are just insane; it would be way too expensive.

Maybe you could do a video on SSDs; it's a rather hot topic around forums, and it's becoming a thing in both the PC and even the console space, with MS taking the tech to PC as well. Some deep dives on Optane, NVMe, standard SATA etc., benches, history and thoughts :p It's probably a topic that stretches over multiple videos.


I don't understand why people are willing to pay $1000 for a phone but for consoles they want to wait years for a magic cheap Black Friday bundle at $199.

Easy: a console is just for playing video games, nothing more. That's how 99% of people see it; it's a toy. A phone is used for... everything; it's practically a 24/7 device.
Probably also the reason why some willingly pay so much more for a PC/laptop.
 
Out of curiosity: these trace conversions to other forms of energy are conversions to what kinds of energy?

I imagine other forms of EM radiation, maybe radio waves?

DirectStorage?

Maybe also BCPack at some point too. Moving forwards we're going to see more than 8 cores, and even 8-core systems will move past the consoles due to higher IPC and higher frequency.

There might be lots of opportunity to spin up decompression threads so they fit nicely around other workloads; see the sketch below.
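As a toy sketch of that idea, assuming plain zlib-compressed chunks (Python's zlib releases the GIL during decompression, so a small worker pool really can soak up otherwise idle cores):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decompress_chunks(chunks: list[bytes], workers: int = 4) -> list[bytes]:
    """Fan compressed asset chunks out to a small worker pool.

    zlib.decompress releases the GIL, so on a many-core CPU these
    workers can run on cores the game isn't saturating. A real engine
    would size the pool from profiling rather than a constant.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.decompress, chunks))

if __name__ == "__main__":
    payloads = [zlib.compress(bytes([i]) * 1_000_000) for i in range(8)]
    assets = decompress_chunks(payloads)
    print(sum(len(a) for a in assets), "bytes decompressed")
```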
 
Console prices are similar, if not the same, as the reason game prices cannot be raised to match inflation/increasing development costs.
Yep. Games have looked to alternative revenue streams rather than $80 RRPs, and console companies are doing the same. And as subs are far more profitable, cheaper hardware with a significant install base looks like the best business strategy, as seen from my armchair.
 
What DMM costs that? You're into decent o-scope territory there.
Modern 6.5-digit models from reputable brands are all in that price range. My scope is old crap that's a LOT less expensive. EE students think the scope is where you have to put all your money; that's no longer the case with cheap Chinese brands, and there are really great (hackable) scopes for $400.

Yet my phone is always the "free" model with the cheapest contract.

I mean to say the important launch price target is how much the majority of people are willing to pay for a gaming console, not the forum posters, who obviously are much more passionate than average. There might be 10 million gamers willing to pay $600 USD, but not 100M+. We have data from the mid-gen refreshes: the vast majority didn't think 2.3x or 4.5x more power was worth $100 more, and that was 2 and 3 years ago. Inflation didn't make it a more acceptable expense, and neither did the teraflops. Launch year might get away with $499 for early adopters, but it will need a price drop within the next year or two, and another with a slim. Engineering for cost is about the entire generation, and predicting the memory market and 5/3/2nm is going to be difficult and risky.
 
Engineering for cost is about the entire generation, and predicting the memory market and 5/3/2nm is going to be difficult and risky.
This is something that determines the best engineering choice years after release. Let's say, for example, PS5 and XBSX both launch at $500. XBSX is more powerful and sees a considerable sales boost over XB1. Let's say it's 1:1, with the PS brand helping prop up PS5 sales against the performance deficit. At that point, all the arguments are 'MS made the better engineering choice, Sony totally screwed up,' and everyone's thinking XBSX will end up selling more than PS5 as the brand grows from strength to strength. But then if, over the next five years, Sony manages to cost-reduce down to $200 while XBSX stays at $350, PS5 sales would pick up, and by the end of the generation Sony would win the sales prize. Or, more important to Sony, maybe MS takes the sales crown by subsidising $150 on every unit while Sony pockets tasty profits. Then the argument for the best engineering choice is flipped on its head.

I have absolutely no idea if that's possible, but it illustrates that any claims of poor choices we might make now are really short-sighted. While I'm scratching my head over 36 CUs and super-fast clocks, do Sony have an inkling of where they need to be headed to have the best sell-through of the entire platform that I'm oblivious to? We can't start to answer that until maybe halfway through next-gen, with the first cost reductions and hardware refreshes.
 