Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
Tensor cores were something that popped into my head as an example. Possibly a bad one.
It's about the concept that an accelerator may make it into the console because they've added it onto the server part.
I believe it's supposed to be console first, server second ;)
The main priority of the SoC going into the data centre is to serve streaming.
Unused capacity can be put towards enterprise applications, likely at a cheaper price point (less availability, slower returns on results)
Gaming has a pronounced rush hour, and you'll likely see usage drop off almost entirely overnight and in the early morning. The silicon needs to be available to support everyone on their morning and evening commutes, breaks, lunch, etc. Outside of those major peaks of usage, the silicon is then readily available for cheaper processing.
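The peak-sizing argument above can be sketched numerically. This is a toy model with invented load figures (the hourly demand fractions are assumptions, not data): a fleet sized for the evening rush hour leaves large amounts of spare capacity overnight that could be sold as cheaper batch compute.

```python
# Hypothetical sketch (illustrative numbers only): how much off-peak
# capacity a diurnal gaming load might leave free for batch/enterprise work.

# Assumed hourly gaming demand as a fraction of total fleet capacity.
gaming_load = {h: 0.15 for h in range(24)}            # overnight baseline
for h in range(7, 9):   gaming_load[h] = 0.55         # morning commute
for h in range(12, 14): gaming_load[h] = 0.60         # lunch break
for h in range(17, 23): gaming_load[h] = 0.95         # evening rush hour

# The fleet must be sized for the peak; everything below that is spare.
peak = max(gaming_load.values())
spare = {h: round(peak - load, 2) for h, load in gaming_load.items()}

print(f"fleet sized for peak utilisation: {peak:.0%}")
print(f"spare capacity at 3am: {spare[3]:.0%}")   # cheap batch compute window
```

Under these made-up numbers, 80% of the fleet sits idle at 3am, which is the capacity the post suggests could be resold at a lower price point.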

The point is to maximize the hardware in the data centre, not to offer premium data and console experiences with the same chip.
 
I don't see a reason not to use a Zen 2 chiplet, it's almost perfect for a console application considering the GDDR6 latency. The real question is how large are MS/Sony prepared to go on the GPU/memory controller die?
 
With the major publishers pushing heavily for software-as-service for their major IP, a gaming platform that is a console-as-service would have a value-add for those trying to get in on the monetization paradigm.
Despite the lucrative possibilities and somewhat low standards for the publisher's end of the "service" relationship, adding hooks for securely and effectively supporting microtransactions, measuring engagement, advertising, subscriptions, DLC, non-massive online (i.e. online-only single player, squad-based, ladder-based play, various ratios of spectator/asymmetric gameplay) is a barrier for smaller providers or smaller projects that might want to get into the gold rush.

A cloud-linked console platform can be positioned to allow almost any game to become a game-as-service, with API hooks and console/server services that can provide the infrastructure. As a major publisher and software developer, it can also position Microsoft as a provider of pre-existing frameworks or partner engines that can plug in transaction or online hooks, and also provide guidance on how to monitor their effectiveness and to tweak engagement and monetization strategies--as a service to the turnkey-software-as-service game developer.
That is, in a gold rush, the most money is made selling mining tools and supplies.
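To make the "platform-provided hooks" idea concrete, here is a minimal sketch of the kind of turnkey game-as-service SDK surface being described. Every name here (`ServiceHooks`, `track_engagement`, `purchase`) is invented for illustration; no real platform API is implied.

```python
# Hypothetical sketch of turnkey "game-as-service" hooks a platform SDK
# might expose to small developers; all names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ServiceHooks:
    """Platform-side plumbing a small developer would otherwise build themselves."""
    events: list = field(default_factory=list)

    def track_engagement(self, player_id: str, event: str) -> None:
        # In a real SDK this would be batched and shipped to the platform's
        # analytics backend; here we just record it locally.
        self.events.append((player_id, event))

    def purchase(self, player_id: str, sku: str, price_cents: int) -> bool:
        # The platform handles payment, entitlement, and fraud checks
        # centrally; the game only sees a success flag.
        self.track_engagement(player_id, f"purchase:{sku}")
        return price_cents > 0

hooks = ServiceHooks()
hooks.track_engagement("p1", "session_start")
ok = hooks.purchase("p1", "horse_armor", 299)
print(ok, len(hooks.events))
```

The point of such a layer is exactly the one made above: the monetization and measurement infrastructure becomes a platform service rather than something each small studio must build.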

Several facets of the old console paradigm would change, such as extending the limited forays into installment- or subscription-based console purchasing. Given the prevalence of massive day-one patches and long streams of content/patch updates for long-lived games, the trend makes even many games not based on a service progressively unusable in their initial form (or their online hooks can sabotage the experience once the service is halted). Stopping the clock would begin to degrade the experience. The console in a console-as-service is a console that gamers effectively never stop paying for, which can shift the economics of the hardware on both sides of the connection.
Now that the hooks are in the local client, brand deals can shift dynamically, meaning more responsive value-add for advertisers. With the instrumentation of the platform now within reach of the advertiser, platform provider, and publisher, how effective these are can be measured, tweaked, and made more pervasive across many different consumption portals. Advertisements would not become out of date, interaction with sponsored content could be measured, and actions taken by the player in their personal space or home can be monitored and analyzed by the platform, publisher, and corporate clients paying for the data in real time. Those various parties would also have that extensive cloud infrastructure to data-mine and then store massive histories, which the platform holder can either collect rent on or utilize itself.

On one hand, this in-built service and monetization may provide something of a floor in terms of long-term support of a product. If a dwindling playerbase still trickles in ad impressions or microtransactions, it may still pay enough for a small number of transient server instances wandering amongst the worldwide data centers. The other outcome is that the parties on the other side of the console connection will have more discretion over when they can encourage players to move on from a title, if it is perhaps not profitable anymore, or at least not as profitable as something else they can compel a gamer to switch to. With the proper hooks or containers, games can at least theoretically be updated into their sequels, much like how Microsoft insists there will never be another version of Windows, and users will have as much say in the state of their game as modern Windows users do.
Software nudges can also progressively push players to buy new hardware by shifting their software experience enough that new consoles become the more premium experience.
 
I think you're very wrong. What has share buttons on it? The PS4?

The PS4, the Switch, the next Xbox.

The success of the whole endeavor is dependent on the quality and quantity of games available on the platforms and no matter how much MS invests in studios this cannot be determined by exclusives alone. Not designing around an SoC that will produce the best gaming experience possible first and foremost runs counter to this.

I agree, which is why Microsoft's stated plan to do something else should be concerning.

It's about the concept that an accelerator may make it into the console because they've added it onto the server part.

I think Tensor cores are exactly the kind of thing Spencer is alluding to.
 
The PS4, the Switch, the next Xbox.



I agree, which is why Microsoft's stated plan to do something else should be concerning.



I think Tensor cores are exactly the kind of thing Spencer is alluding to.
You're rushing to conclusions. You assume that just because it's dual-purpose it can't do its job better than the traditional way consoles have been built.
 
Only if you assume that their statement indicates that their design efforts will result in a part that is deficient in performance for gaming. There is no reason to assume that this is the case.

There's every reason to assume this. It's what always happens when you try to serve two masters. It will either impact performance or it will impact price. The same mistake with Xbox One impacted both. MS has never come anywhere close to earning blind faith in matters like this. And, frankly, the more concerning part is that it demonstrated how tenuous their interest in gaming is. If sales aren't good enough fast enough, they are liable to start slashing investment again, starting the whole boom->bust first-party cycle over.
 
There's every reason to assume this. It's what always happens when you try to serve two masters. It will either impact performance or it will impact price. The same mistake with Xbox One impacted both. MS has never come anywhere close to earning blind faith in matters like this. And, frankly, the way more concerning part is it demonstrated how tenuous their interest in gaming is. If sales are good enough, fast enough they are liable to start slashing investment again, starting the whole boom->bust first party cycle again.

Changing the paradigm, as laden with corporate buzzword fluff as it is, can mean changing what is traded off. Up-front price and performance can be preserved, since service subscriptions, transactions like in-game purchases and loot boxes, elimination of the secondary market (you can't resell a service), and monitoring of user habits, preferences, and consumption can compensate. Gamers can be pushed more quickly to games publishers want, and game experiences can be tailored to get them to spend more money. The performance caveat is that, depending on the regulatory regime of the user and data center, it's possible in places with poor infrastructure or an ISP monopoly to have that portion of the performance equation penalized, unless for example Verizon or Comcast are allowed to snoop into the platform as well.

Microsoft and publishers can keep hardware costs down and performance up in exchange for the control the user gives up over their console, game experience, wallet, and themselves.
 
There's every reason to assume this. It's what always happens when you try to serve two masters. It will either impact performance or it will impact price.

It doesn't have to. Not if there is significant overlap between the two sets of requirements (which there is in this case), or if one is given precedence over the other, which is likely given the business model they are building around it.

The comparisons to Xbox One are not valid, either. Games were clearly de-prioritized by the management team that conceived of and executed that design, from the launch of the system and its "TV, TV, TV!" focus to the fact that, instead of investing in games studios to support their new system, they invested in a TV production studio. The moves they are making around this launch are totally and completely centered on gaming.
 
The PS4, the Switch, the next Xbox.



I agree, which is why Microsoft's stated plan to do something else should be concerning.



I think Tensor cores are exactly the kind of thing Spencer is alluding to.
The Switch doesn't have a share button. It allows you to take screenshots in game, and holding down that button will record video, but that's it. I have one next to me right now. On my Kinect Xbox One I could just say "record that" and it would record.
 
The switch doesn't have a share button. It allows you to take screen shots in game and that's it. I have one next to me right now. Holding down that button will record video. On my Kinect xbox one I could just say record that and it would record .

I have a DualShock 4 next to me right now. If I press the share button, it'll take a screenshot; if I hold it, a menu will pop up, and pressing square will save the last 15 minutes of gameplay.

They may not call it a share button, but from the sound of things, it is one.

It's a useful button, and one that I thought was a bit daft at its reveal, but I really dig it, it's really handy. I'd be surprised if Microsoft don't follow suit too.
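The press-vs-hold behavior described above is a simple dispatch problem. A minimal sketch, assuming a hold threshold of half a second (the real firmware values are unknown to me), just maps the duration of one button interaction to an action:

```python
# A minimal sketch of press-vs-hold dispatch like the share/capture buttons
# described above: a short press takes a screenshot, a long hold records.
HOLD_THRESHOLD_S = 0.5   # assumed threshold; real firmware values unknown

def dispatch(press_duration_s: float) -> str:
    """Map one button interaction to an action based on how long it was held."""
    return "record_video" if press_duration_s >= HOLD_THRESHOLD_S else "screenshot"

print(dispatch(0.1))   # screenshot
print(dispatch(1.2))   # record_video
```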
 
At their current rate MS will most likely add a Mixer broadcast button. Hopefully it can be reprogrammed like the custom buttons on the chatpad.

Tommy McClain
 
At their current rate MS will most likely add a Mixer broadcast button. Hopefully it can be reprogrammed like the custom buttons on the chatpad.

Tommy McClain
I suspect that their purchase of Mixer and its technology stack for low-latency streaming may in some way be leveraged here. Add some additional silicon to help the service reduce latency even further for things like game streaming.
 
Only if you assume that their statement indicates that their design efforts will result in a part that is deficient in performance for gaming. There is no reason to assume that this is the case.
you're rushing to conclusions. You assume just because it's dual purpose it can't be doing it's job better than the traditional way consoles have been built.
Dual purpose is not necessarily an issue, but it does sometimes have consequences for the gaming value, either in price or performance.

I believe the PS3 was partly more expensive at launch because of the inclusion of Blu-ray and some other non-gaming features. Also because of BR we got it one year late in the US, and almost one and a half years late in the EU, without the performance benefits of a one-year-"newer" console compared to the 360. It was prohibitively more expensive too.

The One also sacrificed some of its BOM on the inclusion of a camera and seamless TV viewing, which also required some performance overhead, while the console was more expensive than the PS4. It also leads one to assume that a lot of the R&D budget went to the TV and camera aspects of the console.

Both of these examples suffered in the market, and the gamer did not get value from the extra cost and/or delays for the gaming experience.
 
The One also sacrificed some of its BOM on the inclusion of camera and seamless TV viewing, which also required some performance overhead while the console was more expensive than the PS4. In addition it makes someone assume that a lot of the R&D budget went to the TV and Camera aspects of the console.

They were/are the same price.
 
Dual purpose is not necessarilly an issue, but it sometimes do have some consequences on the gaming value. Either in price or performance.

I believe the PS3 was partly more expensive at launch because of the inclusion of Blu Ray and some other non gaming features. Also because of BR we got it one year late in the US and almost one year and a half late in EU without the performance benefits of a one year "newer" console compared to the 360. It was prohibitively more expensive too

The One also sacrificed some of its BOM on the inclusion of camera and seamless TV viewing, which also required some performance overhead while the console was more expensive than the PS4. In addition it makes someone assume that a lot of the R&D budget went to the TV and Camera aspects of the console.

Both of these examples suffered in the market and the gamer did not get value from the extra cost and/or delays for the gaming experience
There will always be an element of risk involved in pushing boundaries or attempting to change the game; such has always been the nature of the business. There have been a great many failures from pushing the progression and advancement of things in every industry, attributable to many different factors. At times we saw winners and we saw losers, but one thing that has always stayed true is that those who failed to innovate or change at all were guaranteed death.

It is often easier to point at the reasons why we should not do something than why we should. There is great purpose in what MS is trying to achieve, and that purpose lines up with the rest of their business plans: providing services at scale. Not only do they have experience in this area, they have the infrastructure to make it happen.

I have no doubt that it is critical to look back on the past and see how things have failed, but fear of failure is not a reason not to pursue change. At least as a shareholder: if Ballmer and team were still at the wheel, we'd have seen Windows decline into nothingness, as he was completely unwilling to adapt to changing markets, and I would never have invested in a leader whose sole purpose was to set the company on autopilot and expend all its resources trying to lock down its existing customers.

Having said that, it's always easy to point out where trying to innovate failed them in the past. But we seldom look at where innovation is largely what made them successful. Any company today can be disruptive; you no longer need to be a giant to have sway in the industry. The pace of change has only gotten faster, and stagnation only more dangerous.

From 2013 to 2018 there have been significant changes in the way things are done. MS uses their Azure compute centres to simulate real game code on simulated chips before they are even baked, a dramatic difference from the way chips used to be designed. We see the positive effects of that in the Xbox One X console; it's certainly performing exactly where they want it to be, and then some.

The way computing in general is done is changing, and rapidly: algorithms, which used to be the answer for everything, are now sharing their space with machine learning. Trained models are more complex and significantly more effective and faster on specific problems, often ones where exhaustive evaluation would take far too long to afford. The way silicon is made is slowing down; Moore's Law is no longer applicable, and companies must find a way to extract more value out of their manufacturing process.

And MS have found a way to circumvent one of the major challenges all console manufacturers face: traditionally they must set a lower bar of performance for all their silicon to ensure they are getting effective yield from manufacturing, which is why we see redundant CUs. But because MS needs to build perhaps 4x the number of chips for its cloud strategy, it can now effectively bin the best-performing chips (no CU loss) and sell those as consoles, and put the lower-yield chips into server banks. Even if MS and Sony came to exactly the same conclusions on hardware, with Sony using the traditional method, MS would win on performance by default because they'd be selling the higher-binned version as the base model.
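The binning argument above can be illustrated with a toy yield model. All numbers here are invented (the CU count, defect rate, and salvage threshold are assumptions, not real figures): perfect dies go to retail consoles, while dies with a couple of defective CUs are salvaged for server racks instead of every console shipping with CUs disabled.

```python
# Illustrative sketch (invented numbers) of the binning idea above: fabricate
# one large batch of chips, send fully enabled dies to consoles and dies with
# a few defective CUs to server racks, instead of disabling CUs everywhere.
import random

random.seed(42)
TOTAL_CUS = 44          # assumed CU count per die
BATCH = 10_000

def functional_cus() -> int:
    # Toy defect model: each CU independently has a 1% chance of being bad.
    return sum(random.random() > 0.01 for _ in range(TOTAL_CUS))

dies = [functional_cus() for _ in range(BATCH)]
consoles = [d for d in dies if d == TOTAL_CUS]                   # perfect -> retail
servers  = [d for d in dies if TOTAL_CUS > d >= TOTAL_CUS - 2]   # salvage -> cloud

print(f"perfect dies for consoles: {len(consoles)}")
print(f"salvageable dies for servers: {len(servers)}")
```

The design point is that a second market for imperfect dies raises effective yield, which is the advantage the post attributes to a shared console/server chip.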

Aside from changes in the way business is done, MS has changed the way they work: vertical silos are no more, and the company is working across teams, providing a level of integration never seen before at the company.

But most importantly, is that MS has been much better at learning from their mistakes.

They communicate their plans earlier (hell, their planning is much better and much longer-term), they address concerns earlier, all of this before the actual product launches: a stark difference from their 2013 launch. They are not given the same blind faith their competitors enjoy. MS does not have the luxury of Apple, which can announce and launch products the same day and the faithful will buy them en masse. To a degree, with Sony openly stating that they are trying to emulate Apple in this regard, MS is not given the same blind faith Sony fans have for their console of choice.

No, MS has to constantly prove its value repeatedly, and their strategy of communicating plans early gets discussions like these out earlier. It gives journalists time to absorb and think about what MS is trying to do. It gives the consumers/fans/haters/warriors their time to criticize. It lets them figure out their communication plan from the feedback. And by the time launch is happening, they've found a way to address many of the major concerns that gamers had, and they go for it.

As I've stated earlier, I'll heed the warnings. But we're supposed to be talking about the technical side of what can be achieved, given what we know, and ideally with as many realistic variables accounted for (that we know of, at least). I find too often we are derailed by counter-arguments that conveniently leave out details or past discussion in hopes of making their own arguments appear stronger.
 
I have a DualShock 4 next to me right now. If I press the share button, it'll take a screenshot, if I hold it, a menu will pop up and pressing square will save the last 15 minutes of gameplay.

They may not call it a share button, but from the sound of things, it is one.

It's a useful button, and one that I thought was a bit daft at its reveal, but I really dig it, it's really handy. I'd be surprised if Microsoft don't follow suit too.
I never said the PS4 doesn't have it. I said it's the only one that does. My Switch doesn't do that. Meanwhile, tech from Kinect has been adopted into dozens of other products.
 
There will always be an element of risk involved with pushing boundaries or attempting to change the game, such has always been the nature of the game. I have found that there have been a great deal of many failures from pushing the progression and advancement of things in any industry, and a lot of that failure can be attributed to a great deal of many different factors, and at times we saw winners, and we saw losers, but one thing that has always stayed true is that those who failed to innovate or change at all were guaranteed death...
I don't think any of that really counters the argument that it may compromise the console experience.

Best case, the hardware designed for Azure servers is also ideal for consoles. If true, it needn't be specifically designed for servers; the perfect console design would naturally be a perfect fit for servers. Worst case, in designing a box that works ideally as a server box, it doesn't work well as a console box. Like designing a car platform meant to be both an SUV and a sports car depending on the shell.

I'm not saying that's a perfect analogy, as computing is different and it's possible that the computing structures used for servers are good for consoles too. I'm also not saying it's bad business for MS. The quality of the end result will show whether it's a compromise for console owners who are left playing second fiddle to the money mainstream of cloud services, or whether MS do something clever and manage to bring a synergy across platforms that works to console owners' advantage.

The important point in this hardware prediction thread is that we have MS telling us they are considering Scarlet's HW use in servers as well as consoles. By considering what a server needs versus a console, we can shift our expectations of Scarlet. Does that point to more CPU power than usual for a console? Is CPU totally out of fashion in servers? What about RAM and bandwidth, typically significant components in servers; will Scarlet support wider buses and more RAM in different configurations from the console? Could that point to multiple hardware configs?

There's really plenty here to discuss of hardware merit instead of talking about business strategies and whether console gamers are being sidelined or not. ;)
 
If they are identical, wouldn't having the chip used in the cloud space still mean greater economies of scale, making the console hardware cheaper than would otherwise be possible?
 