Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

I think Sony could add a low-power (low-clocked) special IDLE state where you mine cryptocurrency...

And then Sony would lose all unit sales to professional miners who don't buy a single game or a single month of subscription services, turning the PlayStation division into a monumental money drain and killing off all software teams within a year.

Both the PS4 Pro and the XBoneX could have been great miners for their price and power consumption between late 2017 and 2018.
Let's all be thankful they were not.
 
The thing is, who gave AMD more money to design the custom APU?

The 2/3-engineer thing could be bogus, or it could be that Sony put a lot more money AMD's way, which kind of makes sense because the PlayStation brand is very important to Sony and generates a lot more cash than the Xbox division does for Microsoft. Looked at that way, it makes sense for Sony to have put more resources into the custom chips AMD is making for them, and the 2/3-engineer claim becomes plausible.
There are a million ways to slice that statement from Raja, unfortunately. It's the project management triangle of time, cost, and scope.
https://en.wikipedia.org/wiki/Project_management_triangle

If you scoped too big or started too late, the only way to make it is a large investment in resources to make up the shortfall. Thus more labour. If you start early and have your scoping and plan down early, you know exactly how many resources you need and can proceed as planned.

Proper project planning results in money saved; it's when projects go off the rails that more resources need to be thrown at them to meet deadlines, or deadlines are shifted, or both.

A perfectly scoped project delivered on time hits its budget.

So, as I see it: Sony was a year late (no E3), and we were hearing that more engineers had to be sent to Sony. So either they fell behind or they had to rescope.

That's how I look at it, and a lot of PMs will not tell you much differently. Having been on a lot of projects that are behind schedule, I have seen way too many times how more people are just pulled onto the project to hit deadlines, and the catastrophe of costs that follows because those people weren't originally planned to come on board and do the work. I know it because I've been called into such a team; it was such a massive deal that a third of the company was being tasked to it, with people just pulling people into the project. Crazy times. I didn't enjoy the last-minute flights, but it was cheaper for the company to fly me home and back than to keep paying for my hotel.

Eight years later we're still auditing the costs to deliver that project. We got paid a shit ton. Sadly, I'm sure it cost us more to deliver it, though.

TL;DR: you can spend a metric ton more; it doesn't mean you're going to get a better product.
 
It's more a question of "what server applications could a console design be used for?" rather than "how does this design compare to other server architectures?"

And the answer to that is: not a lot that can't be done much better by existing Azure servers. :nope: If you're Microsoft and you have an existing, versatile server infrastructure with a really efficient cost/power ratio, why would you switch to console-class CPUs? You're not saving meaningful power, you can't be saving much (if any) money with this weird design, and you're limiting what the servers can be used for by the choice of CPU. :???:
 
And the answer to that is: not a lot that can't be done much better by existing Azure servers. :nope: If you're Microsoft and you have an existing, versatile server infrastructure with a really efficient cost/power ratio, why would you switch to console-class CPUs? You're not saving meaningful power, you can't be saving much (if any) money with this weird design, and you're limiting what the servers can be used for by the choice of CPU. :???:
They are using EPYC. So I think this dual-purpose angle is really about extracting a bit more from their hardware. I agree with you not to take it too far out of context.
 
They are using EPYC. So I think this dual-purpose angle is really about extracting a bit more from their hardware. I agree with you not to take it too far out of context.

It may make more sense when the next Xbox CPU is a known quantity, but it's difficult to see how a single design will be a good fit for both embedded and server use. Your server is either under-engineered or your embedded solution is over-engineered. Neither is desirable.
 
It may make more sense when the next Xbox CPU is a known quantity, but it's difficult to see how a single design will be a good fit for both embedded and server use. Your server is either under-engineered or your embedded solution is over-engineered. Neither is desirable.
Do you believe xCloud 2 for Scarlett will run on standard servers or on Scarlett hardware?

If it's Scarlett-based, are you then saying you can't think of any workloads apart from games that would be suitable to run on it? Better for it to go unutilized when not game streaming?

Maybe the console could work fine with smaller caches, but larger ones wouldn't be detrimental if they meant dual use for the APUs.
There are many levels of performance profiles that can be provisioned in the cloud; are you saying these couldn't fit any of them, given the hardware will be there anyway?

They wouldn't be replacing their servers, just making use of additional resources that they would have available.
 
Do you believe xCloud 2 for Scarlett will run on standard servers or on Scarlett hardware?

I'm not heavily invested in any of the rumours. I'm waiting for facts.

If it's Scarlett-based, are you then saying you can't think of any workloads apart from games that would be suitable to run on it? Better for it to go unutilized when not game streaming?

I think you misread my post.
 
I'm not heavily invested in any of the rumours. I'm waiting for facts.
I'm asking what you think, based on your thoughts about the compromises you believe will have to be made, and forgetting what Phil actually said if you think it can be taken other ways.

I was talking about this post mainly:
And the answer to that is: not a lot that can't be done much better by existing Azure servers. :nope: If you're Microsoft and you have an existing, versatile server infrastructure with a really efficient cost/power ratio, why would you switch to console-class CPUs? You're not saving meaningful power, you can't be saving much (if any) money with this weird design, and you're limiting what the servers can be used for by the choice of CPU. :???:
 
And the answer to that is: not a lot that can't be done much better by existing Azure servers. :nope: If you're Microsoft and you have an existing, versatile server infrastructure with a really efficient cost/power ratio, why would you switch to console-class CPUs? You're not saving meaningful power, you can't be saving much (if any) money with this weird design, and you're limiting what the servers can be used for by the choice of CPU. :???:
It's not a CPU but an APU with a compute-capable GPU. It's actually a very capable PC being suggested. I wonder how it would fare running productivity apps remotely?

I'm not heavily invested in any of the rumours. I'm waiting for facts.
This 'rumour' comes from Phil Spencer at the Barclays Global Technology, Media, and Telecommunications Conference...

Yeah. If you go and you watch that video again, one of the things to take notice of is the silicon we're using to stream these games is actually the silicon from our console. And it turns out that consoles have very compatible kind of design criteria to what a blade in a server looks like...

The thing that's interesting for us as we roll forward is we're actually designing our next gen silicon in such a way that it works great for playing games in the cloud and also works very well for machine-learning and other non-entertainment workloads...


So the design as we move forward is done hand-in-hand with the Azure silicon team. And I think that creates a real competitive advantage.

Phil Spencer is telling us the next console is designed around Azure integration with an eye on "playing games in the cloud and also...machine-learning and other non-entertainment workloads"

So we shouldn't be comparing it to current server designs, but looking at what other workloads it would be a good fit for, or rather what MS feels it will be a good fit for.
 
If you scoped too big or started too late, the only way to make it is a large investment in resources to make up the shortfall. Thus more labour. If you start early and have your scoping and plan down early, you know exactly how many resources you need and can proceed as planned.

Proper project planning results in money saved; it's when projects go off the rails that more resources need to be thrown at them to meet deadlines, or deadlines are shifted, or both.

Exactly, and I think that was intimated back when the two-thirds thing came out. It does suggest (if that rumour was true) that Sony is paying a lot for the custom APU, that there's possibly a lot of customisation that could lead to things not working as planned, and maybe that's why the latest negative rumours about desktop Navi might be accurate.

I doubt that it's about them starting too late and having to rush it, though. I think it's more about things not working as smoothly as planned.

That's basing a lot on that rumour, though; until we have some solid Navi info we don't really have anything to go on at the moment.
 
It's not a CPU but an APU with a compute-capable GPU. It's actually a very capable PC being suggested. I wonder how it would fare running productivity apps remotely?

Sure, but there is a gulf of difference between a "capable PC" and a server-class processor. Phil Spencer's quote makes it sound like they built a bunch of Azure blades with Xbox internals on them. That doesn't make the CPU a server-class processor.

This is the only thing that really makes sense. Nobody wants to be building anything like Sony's PS2 and PS3 server racks in this day and age. :nope:
 
Exactly, and I think that was intimated back when the two-thirds thing came out. It does suggest (if that rumour was true) that Sony is paying a lot for the custom APU, that there's possibly a lot of customisation that could lead to things not working as planned, and maybe that's why the latest negative rumours about desktop Navi might be accurate.

I doubt that it's about them starting too late and having to rush it, though. I think it's more about things not working as smoothly as planned.

That's basing a lot on that rumour, though; until we have some solid Navi info we don't really have anything to go on at the moment.
As this is the baseless rumour thread ...
Wasn't there a rumor that things got delayed because they wanted BC?
Can't remember if it was due to software or hardware, but I guess they're not mutually exclusive.
Maybe they initially thought they would do it in software but needed some customization in hardware in the end, or vice versa.

Could Navi have some instructions removed that Polaris has? I could see that happening, but I don't know for sure if that happens in GPUs.
Maybe it's normally not an issue, as it would be handled by the driver, but maybe it's not so simple in the PS architecture?

Maybe all that work and involvement by Sony and AMD was less about the future and more about the past?

Regardless, I fully expect both Sony and MS to have invested a lot; I can't see either of them not doing so one way or another, and both should end up with good devices.
 
I doubt that it's about them starting too late and having to rush it, though. I think it's more about things not working as smoothly as planned.

Perhaps BC issues from hardware changes, plus Sony's lack of virtualization layers, meant they needed AMD to come up with a hardware-level solution?
 
Sure, but there is a gulf of difference between a "capable PC" and a server-class processor. Phil Spencer's quote makes it sound like they built a bunch of Azure blades with Xbox internals on them. That doesn't make the CPU a server-class processor.
Do you suspect that the CPUs will be a lot worse than what they already have? For example: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general

Even if they are a lot worse, they would be tasked with different workloads anyway.
 
Sure, but there is a gulf of difference between a "capable PC" and a server-class processor. Phil Spencer's quote makes it sound like they built a bunch of Azure blades with Xbox internals on them. That doesn't make the CPU a server-class processor.

This is the only thing that really makes sense. Nobody wants to be building anything like Sony's PS2 and PS3 server racks in this day and age. :nope:
We seem to be talking at cross purposes. Where's the 'server-class processor' idea coming from? The description is console hardware put into racks for game streaming, which will also be used for non-game cloud services, and as a result of that plan the Azure team has been brought on board to make the console hardware better suited to that server setup.

As we have been told XBN hardware is going into Azure servers and will be used for machine learning and non-gaming applications, isn't it salient to discuss what that hardware brings to the cloud? :-?
 
Sure, but there is a gulf of difference between a "capable PC" and a server-class processor. Phil Spencer's quote makes it sound like they built a bunch of Azure blades with Xbox internals on them. That doesn't make the CPU a server-class processor.

This is the only thing that really makes sense. Nobody wants to be building anything like Sony's PS2 and PS3 server racks in this day and age. :nope:

I don't understand where you're coming from. Phil Spencer has said that next gen console silicon can have more than entertainment roles and that the Azure group is in on development and deployment.

Are you saying this isn't true? Or are you saying there isn't a role for 10+ TFlop compute and machine-learning GPUs in MS's lineup? Do you think MI50-level processors can't have applications beyond games? And if so, why are cloud providers buying them?

What exactly is it about a 10+ TFlop GCN evolution with tons of BW and an SSD and 8 cores per GPU ... that would prevent it running workloads designed originally for a 32-core Epyc CPU cluster and four MI50 (or lower) GPUs?

Some specifics about the un-utility of four such APUs vs a 32-core Epyc with four 7nm MI50s would be really useful for those of us without enterprise/cloud experience!
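
For what it's worth, here's the raw FP32 arithmetic on that comparison (peak rates only, ignoring memory capacity, ECC, interconnect and everything else that actually matters in a server; the APU numbers are just the rumoured figures from this thread, and ~13.3 TF is roughly the published MI50 peak):

```python
# Back-of-envelope FP32 comparison: four hypothetical ~10 TF console APUs
# vs. one node with a 32-core Epyc and four MI50-class GPUs.
# Peak-rate arithmetic only; real server suitability depends on memory
# capacity, ECC, interconnect, virtualisation support, etc.

APU_TFLOPS    = 10.0   # assumed next-gen console GPU (rumoured figure)
APU_CPU_CORES = 8      # assumed CPU cores per APU
MI50_TFLOPS   = 13.3   # roughly AMD's published FP32 peak for Instinct MI50
EPYC_CORES    = 32

apu_node  = {"fp32_tflops": 4 * APU_TFLOPS,  "cpu_cores": 4 * APU_CPU_CORES}
epyc_node = {"fp32_tflops": 4 * MI50_TFLOPS, "cpu_cores": EPYC_CORES}

print("4 x APU node    :", apu_node)    # ~40 TF FP32, 32 cores
print("Epyc + 4 x MI50 :", epyc_node)   # ~53 TF FP32, 32 cores
```

So on paper the gap isn't enormous; the real questions are the ones raw FLOPs don't capture.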
 
One of the advantages that AMD and others claim for heterogeneous systems like APUs (AMD considers chiplets APUs) is better density and energy efficiency [0], which is a nice fit for servers. Considering how many datacenters MS is running, they should be very interested in this, and I can see why they would want to use APUs.

Then there is a specific passage from AMD's exascale paper that sounds very much like it would be a nice fit for a console: "These system-level requirements imply that each node delivers greater than 10 teraflops with less than 200W."

In the paper the CU-to-CPU-core ratio is 8, which would also fit with an 8-core Zen 2 chiplet in combination with a single 64 CU GPU chiplet or two 32 CU GPU chiplets. The latter would be ideal for reuse but seems unlikely, considering it would require techniques to make it invisible and present it as a single GPU to developers. On the other hand, MS is a software company first and foremost, and considering how much experience with graphics APIs they have, MS would probably be the best company to tackle it in combination with AMD (fast coherent interconnect, HSA etc.).
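
As a rough sanity check on that >10 TF per node target, GCN/RDNA-style peak FP32 is just CUs x 64 lanes x 2 ops (FMA) x clock. A minimal sketch, with the 1.4 GHz clock being purely an assumption:

```python
def peak_fp32_tflops(cus: int, clock_ghz: float) -> float:
    """Theoretical GCN/RDNA-style peak: CUs * 64 shaders * 2 ops (FMA) * clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

# One 64 CU chiplet vs. two 32 CU chiplets -- the peak maths is the same.
for label, cus in [("1 x 64 CU", 64), ("2 x 32 CU", 2 * 32)]:
    print(label, f"@ 1.4 GHz -> {peak_fp32_tflops(cus, 1.4):.1f} TF FP32")
# ~11.5 TF either way, so the paper's >10 TF/node figure is reachable at
# fairly modest clocks; staying under 200 W is a process/power question.
```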

As mentioned by other users previously, the chiplet approach would allow MS to add components to the server blades that would not be as useful, or would simply be too expensive, for consoles, like FPGA chiplets (e.g. Project Brainwave).

Not to mention the reuse they would have with chiplets. But there is something which is bothering me: if you really want to maximize the reuse between your consoles and servers then you probably need a GPU which is not specialized to gaming but also has high double precision performance and machine learning instructions like Vega 20, so that you can use the blades to run scientific compute and machine learning tasks when xCloud servers have spare capacity.

At the same time this would be wasted space on consoles, and using a dedicated gaming GPU chiplet plus a server GPU chiplet would destroy the reuse and binning. On the other hand, for Sony it was Navi, Navi and even more Navi rumors regarding the PS5 GPU, but for MS it's all over the place. Maybe they really will use a modified Vega (since it seems more datacenter-focused).

btw: another interesting use for the APUs could be as an extremely fast storage server, which would not need many CPU cores, nor double precision or ML. Microsoft's Project Olympus supports up to 3 PCIe x16 cards plus up to 8 M.2 NVMe SSDs, and in combination with GPU decompression and unified memory this should be really fast.
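
Quick napkin maths on how much raw bandwidth such a storage node could have (assuming PCIe 3.0 and ordinary x4 M.2 drives; both the generation and the per-drive speeds are assumptions, not Olympus specs):

```python
# Napkin maths for the "fast storage node" idea. Assumes PCIe 3.0
# (~0.985 GB/s per lane after encoding) and x4 M.2 drives at ~3.5 GB/s each;
# actual Project Olympus configurations and drive speeds will differ.

PCIE3_GBS_PER_LANE = 0.985
slot_bw = 3 * 16 * PCIE3_GBS_PER_LANE   # three x16 card slots
m2_bw   = 8 * 3.5                       # eight M.2 NVMe drives

print(f"3 x PCIe x16 slots: ~{slot_bw:.0f} GB/s")   # ~47 GB/s
print(f"8 x M.2 NVMe      : ~{m2_bw:.0f} GB/s")     # ~28 GB/s
# Tens of GB/s of flash bandwidth per node is exactly the situation where
# GPU-side decompression and unified memory earn their keep.
```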

btw 2: I mention chiplets but this of course also applies to a monolithic APU. I mention chiplets only because I had some more paragraphs about them but deleted them in the end.

[0] https://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf

Sure, but there is a gulf of difference between a "capable PC" and a server-class processor. Phil Spencer's quote makes it sound like they built a bunch of Azure blades with Xbox internals on them. That doesn't make the CPU a server-class processor.

[...]

Considering that AMD uses the same Zen dies for their desktop (Ryzen, Threadripper) and server (Epyc) CPUs, and will use the same Zen 2 chiplets for their desktop and server CPUs, what defines a server-class processor for you? The only things that seem to differ are the memory channels and support for RDIMM and LRDIMM, as well as validation and customer support.
 
I think the main differences will be in board design, possibly memory also.
It wouldn't surprise me if the blade boards have high-speed interconnects so that two, or maybe all four, boards can be stacked.
Unlike Stadia's use, I don't think it would be made available for game streaming, though.

I've also been giving chiplet/MCM APUs some thought.
I think for PS5 a monolithic die may be the choice, even though it will also be used for game streaming.

But Scarlett has two consoles, game streaming and Azure use. I wonder if they will embrace chiplets etc., allowing slightly more customization and flexibility.
A lot of tech will be ready in time, but consoles are usually based on mature manufacturing.
In the paper the CU-to-CPU-core ratio is 8, which would also fit with an 8-core Zen 2 chiplet in combination with a single 64 CU GPU chiplet or two 32 CU GPU chiplets. The latter would be ideal for reuse but seems unlikely, considering it would require techniques to make it invisible and present it as a single GPU to developers. On the other hand, MS is a software company first and foremost, and considering how much experience with graphics APIs they have, MS would probably be the best company to tackle it in combination with AMD
The biggest problem is in the PC space; server and console, not so much.
Servers probably already have to handle mGPU-type setups.
DirectX already has mGPU functionality.
It may not be invisible, but that may not be as big a deal in a static box. Patch Unity, Unreal and other engines to handle it in a basic way, which may leave some performance on the table but would be a nice fallback.
For MS the loss in performance may be worth it for the overall benefits.
Just waiting to be told that I'm 100% wrong.
if you really want to maximize the reuse between your consoles and servers then you probably need a GPU which is not specialized to gaming but also has high double precision performance
Are the GPUs actually very different, or is it just disabled?
I'm wondering how much actual die saving there is if you don't have double precision.
Edit:
It seems that AI and ML tend to prioritize lower precision, not higher, which is one of the use cases Phil gave. So maybe not having FP64 may not be a big deal for the intended workloads.
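
To put numbers on that: typical GCN rate ratios are roughly 1:2 FP64 on the compute-focused parts (Vega 20 / Instinct), 1:16 FP64 on gaming-focused parts, and 2:1 packed FP16 on both. Applied to a purely hypothetical 12 TF FP32 GPU:

```python
# Illustrative throughput ratios only. 1:2 FP64 is the Vega 20 / Instinct
# rate, 1:16 is typical for gaming-focused GCN parts, and 2:1 packed FP16
# (Rapid Packed Math) applies to both. The 12 TF FP32 base is hypothetical.

FP32_TF = 12.0
parts = {
    "compute part (Vega 20-like)": {"fp64": 1 / 2,  "fp16": 2},
    "gaming part":                 {"fp64": 1 / 16, "fp16": 2},
}
for name, r in parts.items():
    print(f"{name:28s} FP64 {FP32_TF * r['fp64']:4.1f} TF | "
          f"FP16 {FP32_TF * r['fp16']:4.1f} TF")
# ML training/inference mostly leans on FP16 and lower, so giving up fast
# FP64 costs little for the non-entertainment workloads Phil mentioned.
```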
 
As random as the pastebin rumor is for PS5, here's yet another fact matched in his leak. New Assassin's Creed in 2020.
https://www.futuregamereleases.com/2019/05/assassins-creed-kingdom-ragnarok-release-date/

https://pastebin.com/PY9vaTsR
So far he's got the following right
[image: table of the leak's predictions matched so far]

He might have got GTA 6 right too, according to recent rumblings. The 14 TF kit might be the earliest version and the 12.9 TF one could be the latest iteration and more representative of retail.
 
If you look at the things he's got correct, it could all have been educated guesses, but yeah, it's starting to look like it might be accurate. Some of the stuff, like GTA VI being a one-month exclusive, sounds insane though.

Also, a 14 TFlop GPU sounds like a stretch to me; in fact, I'll be impressed if they hit 12 TFlops. I'm expecting just over 10 TFlops.

I suppose the RAM and high-TFlop GPU could just be a dev kit thing and not really representative of final retail hardware.
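
For reference, here's the clock a GCN-style GPU would need at a few CU counts to hit those numbers (pure peak-rate arithmetic, not a claim about the real configuration):

```python
def clock_for_tflops(target_tf: float, cus: int) -> float:
    """Clock (GHz) needed for GCN/RDNA-style peak: TF = CUs * 64 * 2 * GHz / 1000."""
    return target_tf * 1000.0 / (cus * 64 * 2)

for tf in (10.0, 12.9, 14.0):
    row = " | ".join(f"{cus} CU -> {clock_for_tflops(tf, cus):.2f} GHz"
                     for cus in (48, 56, 64))
    print(f"{tf:4.1f} TF: {row}")
# e.g. 12.9 TF is 56 CUs at ~1.80 GHz or 64 CUs at ~1.57 GHz, while 14 TF
# needs 64 CUs at ~1.71 GHz -- which is partly why it reads like an early
# dev-kit figure rather than retail.
```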
 