Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Status
Not open for further replies.
XboneX owners wouldn't be Lockhart's target segment anyways.
We don't know this, because we don't know what MS's intention for Lockhart is, and we don't know its specs. I'm an X owner and a Pro owner, and I might be a Lockhart customer. The point is that Xbox One owners, the ones that own S and OG models, might not upgrade to a system that's essentially a One X either. Why would they, especially if the used market, or even the new market, would have X's available for less? There has to be a compelling feature to push people to upgrade, and traditionally that has been exclusive new software and shinier graphics. It's a lot harder to convince gamers that they need a faster CPU if your screenshots look the same.
 
We don't know this, because we don't know what MS's intention for Lockhart is, and we don't know its specs. I'm an X owner and a Pro owner, and I might be a Lockhart customer. The point is that Xbox One owners, the ones that own S and OG models, might not upgrade to a system that's essentially a One X either. Why would they, especially if the used market, or even the new market, would have X's available for less? There has to be a compelling feature to push people to upgrade, and traditionally that has been exclusive new software and shinier graphics. It's a lot harder to convince gamers that they need a faster CPU if your screenshots look the same.
I second this.

I'd rather go a step further and introduce a concept that I don't think has been raised since the discussions of two consoles began.
Let's revisit the traditional idea: what if MS axes Lockhart/Anaconda and goes with a single SKU?

They've not committed to anything official yet, so right up until the point where they actually announce two separate Xbox devices, it could very well be one device and we wouldn't know any better.

This is why, after much thought, people trying to leak or guess MS's specs seems like a lesson in complete futility.
 
Put my thoughts together last week; didn't see a dedicated thread, so following others. Predictions:

Lockhart
  • 5.4TF Navi (15% raw improvement over Polaris)
  • 12GB GDDR6
  • Zen 2 8c16t 2.8GHz
  • 750GB SSD (could they really go 500-640GB?) Game sizes won't increase as much due to non-duplication and higher compression thanks to GPU decompression. I was in favour of tiered storage on Scorpio, but they may just use some form of fast start while copying from external to the SSD when using external storage.
  • $300 & $350 ($300 is a discless version)
  • Runs 1X compatibility mode for XO, OG & X360
Getting to 5.4TF shouldn't be a problem CU- and clock-wise, so cooling and power delivery should be cheaper too.
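As a sanity check on the "shouldn't be a problem CU & GHz wise" claim, peak TF is just CUs x 64 lanes x 2 FMA ops x clock. The CU counts and clocks below are hypothetical examples for illustration, not from any leak:

```python
def peak_tflops(cus: int, clock_ghz: float) -> float:
    # Peak FP32: CUs * 64 shader lanes * 2 ops per FMA * clock (GHz), in TFLOPS
    return cus * 64 * 2 * clock_ghz / 1000

# Two hypothetical Navi configs that land around 5.4TF:
print(round(peak_tflops(36, 1.17), 2))   # 36 CUs @ 1.17GHz -> 5.39
print(round(peak_tflops(40, 1.055), 2))  # 40 CUs @ ~1.06GHz -> 5.4
```

The One X's 6TF falls out of the same formula: 40 CUs at 1.172GHz.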


Anaconda
  • 12TF Navi
  • 16GB GDDR6
  • Zen 2 8c16t 3.0GHz
  • 1TB SSD
  • $500
  • Runs 1X compatibility mode, maybe with forced 16x AA; OG & 360 games get a 16x resolution boost instead of 9x
Would be nice if they released an updated Xbox SDK to support Anaconda for 1X games, to allow a simple doubling of resolution for games releasing this year etc., so no post-launch patching is required for games without Scarlett dev boxes.

PS5
  • 10TF Navi
  • 16GB GDDR6
  • Zen 2 8c16t 2.8GHz
  • 1TB SSD. Big push on the SSD; as Xbox first-party is limited by PC, Sony has greater scope for how they can use it above and beyond simply loading and streaming assets faster.
  • $450
  • PS5 boost mode.
Not sure Anaconda is worth $50 more in terms of mindshare, so it will be tough to compete, but Lockhart is good value IMO.

For multi-plats, a 2TF difference will probably equate to nothing, depending on how things like RTRT are implemented. So MS would need to fund some graphically impressive games.
They would also benefit from being lead platform; down-rezzing to PS5 would then be a slight benefit (DF). When PS5 is lead platform, the games would just end up the same.

All monolithic; possibly Scarlett is MCM, to allow cloud blades to change the IO & memory controller to use HBM, which would benefit running costs etc.

All custom GPUs, and labelled as custom Zen CPUs, but there won't be much customization there at all.

To me this looks like a pretty reasonable guess.
Especially if MS introduces the high end first, cos early adopters will always pay more and MS gets the title of "most powerful next gen console",
then releases the Lockhart version 6+ months later...
 
I second this.

I'd rather go a step further and introduce a concept that I don't think has been raised since the discussions of two consoles began.
Let's revisit the traditional idea: what if MS axes Lockhart/Anaconda and goes with a single SKU?

They've not committed to anything official yet, so right up until the point where they actually announce two separate Xbox devices, it could very well be one device and we wouldn't know any better.

This is why, after much thought, people trying to leak or guess MS's specs seems like a lesson in complete futility.
The only way I see it working is if Anaconda launches first: high price, high specs, high performance. Then Lockhart launches later as the affordable midrange, the way PC graphics cards launch. Unless Anaconda launches 2 years after Lockhart and is the premium X-style system, but I see little reason for them to develop both at the same time. Part of the X's winning recipe was that Microsoft looked at the software available, analyzed what was needed to run those games at 4K, and tuned the hardware to accomplish that goal. They would lose that insight if they design the hardware too early.
 
Why is PS5 at 10 Tflops when the most believable rumours (if there is such a thing) claim the devkit is at 13?
I've only seen rumours that are worth discussing for the sake of discussion and breaking down; that doesn't mean I think they were true.

You'd have to point me to this rumour you're thinking of and explain why it's credible, e.g. a known insider with multiple confirming sources.

I have no idea what TF they will be, maybe I'm lowballing it; it's more a guide to how they relate to each other.
Although I think 10TF isn't bad, not when games are coded to take advantage of the newer functionality that will be included: RPM, mesh shading, VRS, super quick asset streaming, ID buffer 2.0, Zen, etc.

IMO, the only believable stuff came from Jason Schreier at Kotaku, who wrote that Sony aims to beat Stadia's specs.
That wasn't a rumor; it was him putting out his own opinion, which he clarified.
And even then it just says "aims": when it was known what Stadia was, the only reason someone would say "aim" is if it's close. Otherwise it would be easy to say it's more powerful.
 
Why would they, especially if the used market, or even the new market, would have X's available for less? There has to be a compelling feature to push people to upgrade, and traditionally that has been exclusive new software and shinier graphics. It's a lot harder to convince gamers that they need a faster CPU if your screenshots look the same.
That's not a reason to question Lockhart and the 2-tier approach.
That's the same question someone would ask about whether it's worth buying a PS5. Why not just buy a PS4 Pro?
Any new generation needs reasons to upgrade, and most of the time that will be the games and how different they are.
Although there are also QOL things to take into account; in the case of next gen, loading times, etc.

How will games look much better compared to the 1X on PS5 or Anaconda?
The point is they won't, massively, until games are coded for them, at which point they will look better on Lockhart also.

Lockhart is more than a 1X with a better/upclocked CPU; it's a next gen console, with the pros and cons that go with having to sell that to people.
 
Think I should have added a $450 discless Anaconda.
That would make things pretty interesting.
I'm guessing there's a growing number of people who would rather save $50 than have an optical drive now.
 
Late followup on a few items:
This might be one way AMD could leverage GCN's architecture to satisfy some of the objections Sony's audio engineer had to using the GPU for audio purposes, back in 2013.
The diagram in the patent can be compared to the original GCN architecture diagram, where significant elements are in the same position and shape in both. What's stripped out of the proposed compute unit is most of the concurrent threads, SIMDs, LDS, and export bus.
What's left is a scalar unit that runs a persistent task scheduling program that reads messages over a new queue and matches the queue commands to a table of task programs, then starts them executing.
There's seemingly only one SIMD with a modified dual-issue ALU structure and a tiered register file. While there's no LDS, there's a different sort of sharing within the SIMD with a register access mechanism that allows for loading registers across "lanes" very easily, and a significant crossbar that can rearrange outputs in a programmed way. Some elements of the crossbar may be similar to the LDS, which automatically managed accesses between banks and handled conflicts.
The vector register file is not allocated like standard GCN. Besides the different tiers, the execution model sets aside a range of global registers, and per-task allocations that are created and released in a manner similar to standard shaders.

Once up and running, this CU would run a shader that essentially runs forever, waiting to take host-generated messages directly or to read from a monitored address range. Rather than write to a standard GPU queue, have the command processor read it, engage a dispatch processor or the shader launch path, negotiate for a CU, go through the initialization process, set up parameters, and wait for the CU to spin up, the host might be able to ping this custom unit with a series of writes or an interrupt. In the absence of CU reservation and real-time queues, GPUs can take tens of milliseconds, which Sony's audio group found unacceptable. Even with those measures, a lot of the listed process still has to happen to launch a shader on a reserved CU.
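As a toy software model of that flow (purely illustrative; the names, message format, and task table are invented, not from the patent): the persistent program loops on a message queue, matches each command against a table of task programs, and runs the handler directly, skipping the whole per-task dispatch pipeline.

```python
from queue import Queue, Empty

# Hypothetical task table: command ID -> task program (plain functions here).
TASK_TABLE = {
    0x01: lambda payload: ("mix", sum(payload)),
    0x02: lambda payload: ("gain", [s * 2 for s in payload]),
}

def persistent_scheduler(msgs: Queue, results: list) -> None:
    """Runs until it sees a sentinel, matching queued commands to tasks."""
    while True:
        try:
            msg = msgs.get(timeout=0.1)
        except Empty:
            continue          # nothing from the host yet; keep polling
        if msg is None:       # sentinel: the host shuts the unit down
            return
        cmd, payload = msg
        task = TASK_TABLE.get(cmd)
        if task is not None:  # unknown commands are simply dropped
            results.append(task(payload))

# The host just writes small messages instead of building full dispatch packets.
q, out = Queue(), []
q.put((0x01, [1, 2, 3]))
q.put((0x02, [1, 2, 3]))
q.put(None)
persistent_scheduler(q, out)
print(out)  # [('mix', 6), ('gain', [2, 4, 6])]
```

The latency win in the patent comes from the hardware equivalent of this: the launch cost is paid once, and each subsequent task is only a queue write plus a table lookup.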

Other objections were the generally large minimum concurrency requirements, where a CU's multiple-SIMD architecture required at least 4 (realistically at least 8) wavefronts before it could reasonably be expected to get good hardware utilization, and Sony's HSA presentation indicated a hoped-for flexible audio pipeline that wouldn't need the batching of hundreds of tasks. This stripped-down CU would remove the extra SIMDs Sony wouldn't want to batch for, although it's not clear if there's still 64-element wavefront granularity or if that could somehow be reduced. Just doing that and reducing the number of concurrent wavefronts could save a decent amount of area, perhaps a third. More area could be saved if the texture and load/store units could be reduced in size for a workload that doesn't need as many graphics functions. Some space savings would be lost to the more complex register file and ALU arrangement.

The queue method also provides a different and more direct way to get many low-latency tasks programmed into a software pipeline in a way that isn't as insulated as an API, while the task programs can still abstract away the particulars of the CU.

This could be appealing for one or more custom Sony audio units; at least more so than the existing GPU TrueAudio setup.

As for whether this could be relevant to Navi or a console, I did see one reference to shared VGPRs being added as a resource description for HSA kernels in changes added for GFX10.
https://github.com/llvm-mirror/llvm...3380939#diff-ad4812397731e1d4ff6992207b4d38fa
Although similar wording in a single reference among many thousands of lines of code is slim evidence.


Wouldn't the chiplet design resolve the multi-GPU rendering problem instead?
I mean, can the IO chip virtualise the GPU so that 2 GPU chiplets appear as 1? All the scheduling logic (and a unified L2/L3 cache) would be in the IO chip, and the chiplets would just have the CU array and a small L1 cache; same for the CPU chiplets with their L1 caches...
In fact, I wonder if the chiplet design wouldn't also help realise the full HSA vision, with a fully unified memory pool.
One IO controller overmind chip to rule them all (CPU and GPU)...
Is it not a practical solution?
One item to note is that the path between the unified last-level cache and the CUs supports at least several times the bandwidth of the memory bus, and the command processor, control logic, and export paths can all move values or signals amounting to hundreds to thousands of bytes per cycle to and from the CUs. Separating the CUs from their support infrastructure exposes all the on-die communications that had been considered internal to the die.
 
That's not a reason to question Lockhart and the 2-tier approach.
That's the same question someone would ask about whether it's worth buying a PS5. Why not just buy a PS4 Pro?
Any new generation needs reasons to upgrade, and most of the time that will be the games and how different they are.
Although there are also QOL things to take into account; in the case of next gen, loading times, etc.

How will games look much better compared to the 1X on PS5 or Anaconda?
The point is they won't, massively, until games are coded for them, at which point they will look better on Lockhart also.

Lockhart is more than a 1X with a better/upclocked CPU; it's a next gen console, with the pros and cons that go with having to sell that to people.
Except in the case of PS4 owners upgrading to the hypothetical 10TF PS5, you have a marketing opportunity that goes something like "2.5x faster than Pro" and some whizbang screenshots showing a PS4 game running at twice the resolution, or with higher quality settings. The hypothetical Lockhart I was responding to was 5.4TF (less than One X), 12GB of RAM (same as One X), and a 750GB SSD (less storage than One X, but faster), and maybe no disc drive. This would put its graphics power in the neighborhood of the One X, constrained by the same amount of memory, and enhanced only by a much faster CPU. It's much harder to market a "next generation" box that is the same or worse on paper specs. Furthermore, I know TF aren't all the same, but we are talking about related hardware here; AMD GPUs haven't really shown a performance increase per flop over the last few generations. If a game runs even 10% slower on Lockhart than on the X, that's a non-starter for many people. If it runs the same, it's a non-starter for many people. It has to be better across the board.

Also, if the next generation is going to be real 4K, I think 5.4TF is too little, regardless of any upgrades in CPU. The 6TF GPU in the One X is the best part of the system, and it's not really enough for 4K most of the time. And if you want to increase graphics fidelity, you would need more memory and more GPU power, not the same or less. If Sony launches with 10TF, Microsoft will get destroyed if they launch a console with half the GPU power, unless it's half the price.
 
If Sony launches with 10TF, Microsoft will get destroyed if they launch a console with half the GPU power, unless it's half the price.
All the more reason why, for MS, the debate should really be around Lockhart and not Anaconda. We should be discussing the value proposition of the base model, the one that determines the baseline performance and feature set and is meant to sell the majority of units, yet Lockhart is surprisingly quiet in terms of discussion; people only seem to care about Anaconda.

It's a grave misplacement of focus when you consider which SKU holds all the dice.
 
So you're saying people are buying into the promise of better looking games until they come.
How is that different from buying a Lockhart knowing you will get better games?
If someone wants to buy a previous generation console knowing that they will miss out on next gen games, that's fine.
Games aren't going to look much better at the start compared to the 1X on any console.

Games won't run slower on Lockhart, hence why I said a 15% raw performance improvement; if it's less than 15%, then it will be clocked higher to at least match. The graphics power in BC mode is at minimum the same as the 1X, though not when running next gen games. TF is only a single part of what makes up a console's power (even for the GPU, which may facilitate RTRT and other graphical improvements like mesh shading etc.), but everything else you glide over contributes to the system. CPU, SSD storage: these make a big impact on what is possible and on the experience.
Normal consumers will not know or care how many TF Lockhart has.
The fact is Lockhart will be able to play games that the 1X can't, and play current games just as well, with a better overall experience.

I don't see next gen as being about "real 4K", whatever that may mean to people.
But what you will get is better graphics, regardless of whether you buy a Lockhart or an Anaconda, once games are coded for them compared to the 1X.
I also never said Lockhart is aiming for 4K.

I've got a 1X and I'm considering a Lockhart, so I'm sure people with base Xbox Ones will too, plus the list I gave of people I feel would be interested.

Mums and dads, and people in general, seem to be able to buy TVs, phones, fridges, headphones, you name it, all of which have multiple options, yet for a console with 2 they lose the power to comprehend?
  • Entry option
  • Premium option
Either with or without an optical drive.

Consoles must be one of the very few electrical items where people will have trouble understanding entry and premium?

Models get replaced all the time in consumer goods, and consoles less often than 90% of them.

I'm not sure you're not overthinking it, to be honest?
 
All the more reason why, for MS, the debate should really be around Lockhart and not Anaconda.
In terms of marketing, it's always around the premium product; then they say "from $x".

The people following all this until release are generally interested in the premium product, and in which has the higher flops, PS5 or Anaconda.
Here we know that there's a bit more to it than that.
 
https://www.resetera.com/threads/ps...nd-sonys-ssd-customisations-technical.118587/

This is crazy. Going by the SIE patent, it's possible to go much faster than PCIe 3 and PCIe 4, maybe 10 GB/s for the PS5 SSD... from 1 GB/s to 20 GB/s.

They invented their own file system...

Interesting patents for offloading storage management to custom secondary hardware. With increased speeds, CPU utilization for managing file transfers can become noticeable.

If they went with this, that would indeed deserve the moniker of a "custom SSD".
 
"Additional CPU." That one really could be Cell. Should be a few mm² at 7 nm, wouldn't need to be programmed by anyone other than Sony, and could do PS3 BC.

My thinking here is how to get a Cell in there for PS3 BC. Though it is tiny, it still needs to be justified. Just as PS1's hardware did useful stuff for PS2, PS3's CPU could be beneficial for PS5 as a file processor. Considering the ARM in PS4 is too crap to enable background downloading, a tiny Cell much handle all that gubbins much better. And the value to the system, "plays all your old games," would be substantial in marketing terms.
 
"Additional CPU." That one really could be Cell. Should be a few mm² at 7 nm, wouldn't need to be programmed by anyone other than Sony, and could do PS3 BC.

My thinking here is how to get a Cell in there for PS3 BC. Though it is tiny, it still needs to be justified. Just as PS1's hardware did useful stuff for PS2, PS3's CPU could be beneficial for PS5 as a file processor. Considering the ARM in PS4 is too crap to enable background downloading, a tiny Cell much handle all that gubbins much better. And the value to the system, "plays all your old games," would be substantial in marketing terms.
Probably just an ARM processor, like in the PS4, although much, much better.

Do they even make Cell chips anymore? How is the TDP?

I was thinking it could be a multipurpose processor that also handles PSVR's asynchronous reprojection right in the box, as well as assisting background functions, but I'm not sure something like that can do everything.
 
https://www.resetera.com/threads/ps...nd-sonys-ssd-customisations-technical.118587/

This is crazy. Going by the SIE patent, it's possible to go much faster than PCIe 3 and PCIe 4, maybe 10 GB/s for the PS5 SSD... from 1 GB/s to 20 GB/s.

They invented their own file system...

From that patent:

A secondary CPU, a DMAC, and a hardware accelerator for decoding, tamper checking and decompression.
There it is. Custom hardware for faster decryption and decompression, as we've been saying would be needed if significantly lower loading times were ever to be achieved.
This is something that could take several years for gaming PCs to catch up to. Maybe it can be done in software if people have 16+ cores and massive amounts of RAM to use as a storage scratchpad, but that would also require devs to implement a massively parallel method for decompression/decryption.
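To make "massively parallel" concrete, here's a minimal sketch (my own illustration, not anything from the patent): split an asset into independently compressed chunks so that each can be decompressed by a separate worker, with no dependencies between them.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024  # store assets as independent 64KiB zlib streams

def pack(data: bytes) -> list[bytes]:
    # Compress each chunk separately so decompression can be parallelized.
    return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def unpack_parallel(chunks: list[bytes], workers: int = 8) -> bytes:
    # Independent streams mean no worker ever waits on another's output;
    # CPython's zlib releases the GIL, so the threads genuinely overlap.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(zlib.decompress, chunks))

asset = b"texture and geometry data " * 50_000  # ~1.3MB of mock asset data
assert unpack_parallel(pack(asset)) == asset
```

A dedicated accelerator does the per-chunk work in fixed-function hardware instead, which is exactly why it frees the main CPU cores at these transfer rates.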

"Additional CPU." That one really could be Cell. Should be a few mm² at 7 nm, wouldn't need to be programmed by anyone other than Sony, and could do PS3 BC.

My thinking here is how to get a Cell in there for PS3 BC. Though it is tiny, it still needs to be justified. Just as PS1's hardware did useful stuff for PS2, PS3's CPU could be beneficial for PS5 as a file processor. Considering the ARM in PS4 is too crap to enable background downloading, a tiny Cell much handle all that gubbins much better. And the value to the system, "plays all your old games," would be substantial in marketing terms.

AFAIK Cell was co-developed by IBM to be produced in their fabs. Can TSMC even produce a chip with an embedded Cell without committing IP infringement?
And if so, could they do it without an incredible number of man-months/years dedicated to significant re-engineering?
Wouldn't an 8-core 3.2GHz Zen 2 be able to emulate the Cell? With two 256-bit FMA pipes, each Zen 2 core has four times the theoretical floating point throughput of an SPE at ISO clocks (25.6 vs 102.4 GFLOPS at 3.2GHz), and that's before accounting for much better utilization due to modern schedulers and much larger caches. Would Sony even need a Cell to emulate the PS3 at this point?
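Those peak numbers fall out of simple arithmetic (SIMD lanes x 2 FMA ops x pipes x clock). This is back-of-the-envelope only, and says nothing about the real difficulty of emulating the SPEs' local-store programming model:

```python
def peak_sp_gflops(lanes: int, pipes: int, clock_ghz: float) -> float:
    # Each SIMD lane retires 2 FLOPs per cycle via fused multiply-add.
    return lanes * 2 * pipes * clock_ghz

spe = peak_sp_gflops(lanes=4, pipes=1, clock_ghz=3.2)   # Cell SPE: one 128-bit FMA
zen2 = peak_sp_gflops(lanes=8, pipes=2, clock_ghz=3.2)  # Zen 2: two 256-bit FMA pipes
print(spe, zen2, zen2 / spe)  # 25.6 102.4 4.0
```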

Using modern ARM cores would enable them to use them in standby mode with very low power utilization, whereas applying power-saving features to a Cell would again require massive engineering efforts.
AMD is used to embedding ARM cores into their CPUs/APUs, but they haven't done anything remotely similar with Cell.

And this is all assuming Sony would see a substantial market benefit in enabling PS3 BC into the PS5. What demand is there to play PS3 games that weren't already ported to the PS4?
 
As much as I'd love perfect compatibility for my PS3 games, the demand for it probably can't justify the engineering effort of shrinking Cell down to 7nm. It's hundreds of millions invested for the few percent of gamers who would buy the PS5 anyway. The majority of must-have titles on PS3 have a remaster on PS4, which is profitable for Sony, and the public is generally receptive. It just sucks for the more obscure titles that will never get remastered, but again, those have no demand, and the few gamers who really care about old PS3 games still have their PS3.

With that said, they need a plan for PS Now and PS3 games: how would they scale access to this catalog of old games? Maybe the demand is so low that their current deployment is enough. Or server-side x86 emulation is sufficient, despite being power hungry.
 