Next Generation Hardware Speculation with a Technical Spin [2018]

At this point in time, those can't be issues for MS. They're staying on Windows 10 from here on (across all of MS).
Their drivers are mature, and there is no DX13 coming. DXR is out now, DirectML is due in spring of next year, keyboard/mouse support arrives this month, high-frame-rate support up to 120 fps is out now, and FreeSync is here now. Everything they need will be, or already is, in place before launch.

The same could apply to Sony, but I'm not as familiar with them or their plans.

Interesting, I had completely forgotten about DirectML and Windows ML.

For people who don't know:

https://blogs.msdn.microsoft.com/directx/2018/03/19/gaming-with-windows-ml/

Potentially interesting stuff.
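
Roughly, the idea in the linked post is that a game ships a trained ONNX model, Windows ML loads and evaluates it in-process, and DirectML provides the hardware acceleration underneath. As a very rough sketch of that load/bind/evaluate flow (using ONNX Runtime in Python as a stand-in rather than the actual WinML API, and with the model path, input name and tensor shape all made up):

```python
import numpy as np
import onnxruntime as ort

# Load a trained model (placeholder path); WinML's equivalent is loading an
# ONNX file into a model object and creating a session on a chosen device.
session = ort.InferenceSession("model.onnx")

# Bind an input tensor and evaluate it. The input name and the 1x3x256x256
# shape are made up; a real game might feed, say, a rendered frame here.
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 256, 256).astype(np.float32)
result = session.run(None, {input_name: frame})

print(result[0].shape)
```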

Regards,
SB
 
They both have two generations of complex console OSes under their belt now. The only explanation for a screwed-up next-gen OS would be incompetence (that is, we wouldn't expect it, but it can happen).
Not just that, but there are a lot of lessons learned from the standby/rest-mode snafus. I would hope the PS5 can do those things as intended on day one.
 
Interesting, I had completely forgotten about DirectML and Windows ML.

For people who don't know:

https://blogs.msdn.microsoft.com/directx/2018/03/19/gaming-with-windows-ml/

Potentially interesting stuff.

Regards,
SB
IMO this is the most important feature for next gen. More than RT, it's applicable in a variety of ways and is an enabler for high-fidelity graphics, or high fidelity in other areas. It's where I believe the majority of innovation will be for next gen.

I truly believe this. RT without ML won't fly. Neither will SVOGI. Neither is efficient enough on its own. But with ML, both are doable.

The more you can model and approximate with ML (and I'm not speaking strictly about graphics), the more resources you free up to do other things.
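
To make that concrete: the sort of thing I mean is a learned denoiser sitting on top of a cheap, noisy render (a couple of rays per pixel) instead of brute-forcing more samples. A toy sketch in PyTorch, with a made-up network and random tensors standing in for real render data, nothing like a production denoiser:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Predicts a correction to a noisy, low-sample-count render."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy):
        return noisy + self.net(noisy)  # residual: start from the noisy image

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random tensors standing in for (noisy 1-spp render, clean reference) pairs.
noisy = torch.rand(4, 3, 64, 64)
clean = torch.rand(4, 3, 64, 64)

opt.zero_grad()
loss = nn.functional.l1_loss(model(noisy), clean)
loss.backward()
opt.step()
print(loss.item())
```

The point being that the trained network stands in for the many extra rays you would otherwise have to trace.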
 
IMO this is the most important feature for next gen. More than RT, it's applicable in a variety of ways and is an enabler for high-fidelity graphics, or high fidelity in other areas. It's where I believe the majority of innovation will be for next gen.

I truly believe this. RT without ML won't fly. Neither will SVOGI. Neither is efficient enough on its own. But with ML, both are doable.

The more you can model and approximate with ML (and I'm not speaking strictly about graphics), the more resources you free up to do other things.
As an example:


Commentary:

www.physicsforests.com
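
If I understand it correctly, Physics Forests trains regression forests to predict the next simulation state directly instead of running a full solver every frame. Here's a toy analogue with scikit-learn; the damped-spring system and every constant below are invented just to show the "learn the solver offline, query it at runtime" idea:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy dynamics to be "learned": one timestep of a damped spring,
# a = -k*x - c*v, stepped with simple Euler integration.
k, c, dt = 4.0, 0.3, 0.02

def step(state):
    x, v = state[..., 0], state[..., 1]
    a = -k * x - c * v
    return np.stack([x + v * dt, v + a * dt], axis=-1)

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(5000, 2))   # random (position, velocity) samples
targets = step(states)                        # ground-truth next states

# "Offline training": fit a forest that maps current state -> next state.
model = RandomForestRegressor(n_estimators=50, max_depth=12, random_state=0)
model.fit(states, targets)

# "Runtime": the learned model stands in for the solver.
test = rng.uniform(-1, 1, size=(5, 2))
print(np.abs(model.predict(test) - step(test)).max())  # small approximation error
```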
 
Leadership back then and leadership now are two completely different things.

Under Ballmer and the previous executives, with the walled garden and the like, Windows was a zero-sum game. They were more interested in locking in users than in trying to expand their ecosystem. Xbox needed to find a reason to exist, and they attempted to shoehorn content and TV into the console. The cost of studios and exclusives was high, so they bet on third parties being able to sell the system. They failed.

And like VR, the tech is good, but it's a solution to a problem that doesn't exist.
Leadership and direction may have changed, but I don't think they're any better at interpreting the data.

Regarding Kinect, I think the data was pointing toward a consumer demand that had already been extinguished by the 360's Kinect. They sold something like 25 million of them, and it's in the Guinness Book of Records as the fastest-selling consumer electronics device, so there was clearly demand. But just like the fidget spinner, once people got it, they realized they didn't really want it. If MS had GameStop's analytics, they would have known it was one of the most traded-in items of its generation, for the least amount possible.
It was probably the wrong idea to bundle Kinect, which was one decision. But their view that it wasn't being used, and that no one wanted it because its time had passed, is symptomatic of how they misunderstand the wealth of data they have.

How could it be used when they themselves didn't even support it? Where were the games, even Mickey Mouse ones?
Same with the Xbox media capabilities. We have no idea how they would have done, because the console did less as a media device than the 360 did, so of course it wasn't used a lot.
Kinect is also more for the casual market, and now that the console has reached a casual price point, it's not being sold.

One of the biggest benefits going into next gen is that they won't need a whole new OS.
MS has some good ideas, but has a tendency to deliver them in a rough state, or not support them enough.
Then someone else comes along and does it better, and MS supporters claim the world is against MS because no one used it when MS did it first.
 
IMO this is the most important feature for next gen. More than RT, it's applicable in a variety of ways and is an enabler for high-fidelity graphics, or high fidelity in other areas. It's where I believe the majority of innovation will be for next gen.

I truly believe this. RT without ML won't fly. Neither will SVOGI. Neither is efficient enough on its own. But with ML, both are doable.

The more you can model and approximate with ML (and I'm not speaking strictly about graphics), the more resources you free up to do other things.

What is it exactly, and does this include PCs too?
 
If they had any corporate sense, they'd make TVs with the best gaming features like FreeSync and HFR and have a value advantage.

Until recently (2014), Sony's TV division was losing lots of money, which is why Kaz Hirai made the TV division into a subsidiary.
I think the "Sony Visual Products" company can't really be in the business to offer "value" products, as it would be overwhelmed by the likes of TCL and Hisense.
Their high-end TVs have a very good reputation, and AFAIK it's that premium perception that made the company earn money again.
 
As an example:


Commentary:

www.physicsforests.com

Thanks for sharing.

What is that, though? Does it work best with CUDA?

Are tensor cores superior to CUDA cores for a new, better kind of physics for games?

Would it be possible for, say, AMD to have a new type of shader core with improved ML capability in each of them (but individually weak), instead of Nvidia's three-part design of raster, tensor and RT cores?
 
Until recently (2014), Sony's TV division was losing lots of money, which is why Kaz Hirai made the TV division into a subsidiary.
I think the "Sony Visual Products" company can't really be in the business to offer "value" products, as it would be overwhelmed by the likes of TCL and Hisense.
Their high-end TVs have a very good reputation, and AFAIK it's that premium perception that made the company earn money again.
By value, I mean offering a feature to gamers that increases the value of the TV, not budget pricing. If Sony coordinated their PlayStations and TVs, they could include HFR output even if it's not a feature of mainstream TVs, and then offer a range of HFR TVs to sell to PS owners. I didn't call it a USP, though, as I expect other TVs to feature HFR.
 
I'd love it if game AI could become ML, so developers wouldn't have to write bespoke AI engines!

Would it be possible for Sony and MS to use more ML-oriented machines in-house to train their own specific AIs for different games, then repackage them into patches for console games running on hardware with little to no dedicated ML capability?

I'm guessing it would be more about generating a lot of reactions for different situations for the AI, which is time-consuming. Would this be more of a RAM- and time-expensive thing?

I can't remember where exactly I heard it many years ago, but it was mentioned that it's not hard to make competent AI; it's just that the majority of consumers would get frustrated, since the AI could actually destroy you in any game if it were programmed to. Then again, devs probably want more error-prone, human-like AI for games?

On a more positive note, that DirectML stuff seems interesting: an AI that scales based on the target difficulty, tailored to individual skill. I hope it's more sophisticated than scaling HP or locking an enemy or ally moveset.

It sounds a little creepy if it's used to target fluctuating prices at a user's mental weaknesses (internet-ordered goods). I'm going OT now, haha, my bad.
 
By value, I mean offering a feature to gamers that increases the value of the TV, not budget pricing. If Sony coordinated their PlayStations and TVs, they could include HFR output even if it's not a feature of mainstream TVs, and then offer a range of HFR TVs to sell to PS owners. I didn't call it a USP, though, as I expect other TVs to feature HFR.
At one point I half expected Samsung to do that with MS for the X1X, as they seemed to have marketing campaigns together when the X1S was released.

The gaming market is only a subset, but its voice is heard a lot in reviews etc.
As you said, it's about USPs.
 
I'd love it if game AI could become ML, so developers wouldn't have to write bespoke AI engines!
I'm sure there just might be enough data out there. I mean, with all the game companies boasting "after one month we have 1 billion hours played"... there's got to be something there that can be lifted.

Capture all the actions in all the RPG games. You should get a wide range of behaviours, from ultra evil to bunny-rabbit saver. Start applying them back to NPCs somehow, perhaps leveraging the dialogue choices players made as training data as well.

Then, package them back into NPCs and go!
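
A very hand-wavy sketch of what "lift behaviour from player logs" could look like: treat it as supervised learning over (game state, action the player took) pairs, then have the NPC query the model at runtime. Every feature, label and data point below is invented purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Pretend telemetry: each row is a snapshot of game state when a player acted.
# Made-up features: [player_health, enemy_distance, karma_score, in_dialogue]
n = 10_000
states = np.column_stack([
    rng.uniform(0, 100, n),   # health
    rng.uniform(0, 50, n),    # distance to nearest enemy
    rng.uniform(-1, 1, n),    # "karma" accumulated from past choices
    rng.integers(0, 2, n),    # currently in a dialogue?
])

# Labels: what the player did (0 = flee, 1 = fight, 2 = talk). Generated here
# by a fake rule, only so the classifier has a pattern to learn.
actions = np.where(states[:, 3] == 1, 2,
                   np.where((states[:, 0] > 40) & (states[:, 1] < 15), 1, 0))

model = GradientBoostingClassifier().fit(states, actions)

# An NPC at runtime: feed in its situation, act the way a typical player would.
npc_state = np.array([[75.0, 10.0, -0.4, 0.0]])
print(model.predict(npc_state))   # -> [1], i.e. fight
```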
 
I'm sure there just might be enough data out there. I mean, with all the game companies boasting "after one month we have 1 billion hours played"... there's got to be something there that can be lifted.

Capture all the actions in all the RPG games. You should get a wide range of behaviours, from ultra evil to bunny-rabbit saver. Start applying them back to NPCs somehow, perhaps leveraging the dialogue choices players made as training data as well.

Then, package them back into NPCs and go!
If NPCs were coded as player characters, you could drop in AI to drive them as if they were player driven. That's what I've done in ionAXXIA, having the alien ships the same as the player ships with the same control interface, and the AI sits on top and uses virtual controls to fly and shoot.

The problem I have is coding intelligent behaviours. I'm basically limited to modelling attitudes, like evasive or kamikaze, and then I can analyse the situation with a bunch of conditions to choose between behaviours, while casting loads of rays and circle-casts to get a sort of impression of the AI ship's surroundings. More conditions, like "their weapon range is greater than mine, but they've just shot, so their battery will be charging, so I have a time window: am I fast enough to get in and out before they can fire again?", can be coded on a per-choice basis, or as a bunch of weightings, but it's unwieldy and really hard to balance and debug.

Training an AI player from all the other players would be the simplest and most effective solution to provide human-like decision making.
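
For what it's worth, the "bunch of weightings" approach is basically utility AI, and even a stripped-down version shows why it gets unwieldy: every new consideration touches every behaviour's score. A minimal sketch (the behaviours, features and weights are invented for illustration, not taken from ionAXXIA):

```python
# Minimal utility-style behaviour selection: score each behaviour from a few
# handcrafted features of the ship's situation and pick the highest score.
# All names and numbers here are invented for illustration.

def score_behaviours(my_hp, enemy_hp, enemy_range, my_range, enemy_recharging):
    return {
        "evade":    1.5 * (1.0 - my_hp) + 0.5 * (enemy_range > my_range),
        "attack":   1.0 * my_hp + 1.0 * enemy_recharging + 0.5 * (my_range >= enemy_range),
        "kamikaze": 2.0 * (my_hp < 0.2) * (enemy_hp < 0.3),
    }

def choose(scores):
    return max(scores, key=scores.get)

# A situation like the one described: their range beats mine, but they just
# fired, so their battery is recharging and there's a window to dart in.
scores = score_behaviours(my_hp=0.8, enemy_hp=0.5, enemy_range=12.0,
                          my_range=8.0, enemy_recharging=True)
print(scores)
print(choose(scores))   # "attack": tuning these weights by hand is the unwieldy part
```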
 
I'd love it if game AI could become ML, so developers wouldn't have to write bespoke AI engines!
Unity already has some of that functionality. Even though ray tracing caught all the attention, the PICA PICA demo from EA featured ML-based AI as well.

Thanks for sharing.

What is that, though? Does it work best with CUDA?

Are tensor cores superior to CUDA cores for a new, better kind of physics for games?

Would it be possible for, say, AMD to have a new type of shader core with improved ML capability in each of them (but individually weak), instead of Nvidia's three-part design of raster, tensor and RT cores?
It's still experimental, but yes, tensor cores would definitely benefit this ML-based physics engine. I don't see why AMD couldn't jump on the bandwagon too.
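
For context on why tensor cores matter here: they're fixed-function units for half-precision matrix multiply-accumulates, and neural-net inference is mostly big stacks of exactly that. A minimal PyTorch sketch (sizes arbitrary; on a machine without tensor cores the same code simply runs on the ordinary units):

```python
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

if torch.cuda.is_available():
    # FP16 matmuls on a tensor-core-equipped GPU get routed through those
    # units by the GPU libraries; the code doesn't change, only the precision.
    c = (a.half().cuda() @ b.half().cuda()).float()
else:
    c = a @ b  # CPU fallback: same math in FP32

print(c.shape)
```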
 
How does everyone feel about the "AdoredTV" youtube channel as a source? I ask because a bit back when watching one of his videos, something of possible relevance stuck in my mind. I am unsure which video it was, but basically it stated that Nvidia had to "jump the gun" so to speak when it came to the RTX (ray tracing) series. Why? Because AMD was far ahead already from a technical standpoint. By getting the early start Nvidia was going to be able to have more control over the narrative. (That last statement would be my observation if his position is correct and not one stated by AdoredTV.)

If AdoredTV is considered a poor source, my apologies. On the other hand, if he is a reliable observer, would that change anyone's view about the inclusion of ray tracing enhancements? (Using the term loosely to encompass both dedicated hardware or more general compute as options.)

I don't have time to start re-watching his videos so I cannot provide a direct link as yet.
 
How does everyone feel about the "AdoredTV" youtube channel as a source? I ask because a bit back when watching one of his videos, something of possible relevance stuck in my mind. I am unsure which video it was, but basically it stated that Nvidia had to "jump the gun" so to speak when it came to the RTX (ray tracing) series. Why? Because AMD was far ahead already from a technical standpoint. By getting the early start Nvidia was going to be able to have more control over the narrative. (That last statement would be my observation if his position is correct and not one stated by AdoredTV.)

If AdoredTV is considered a poor source, my apologies. On the other hand, if he is a reliable observer, would that change anyone's view about the inclusion of ray tracing enhancements? (Using the term loosely to encompass both dedicated hardware or more general compute as options.)

I don't have time to start re-watching his videos so I cannot provide a direct link as yet.
Let's say that it's best to leave YouTube channels out of your reliable-sources list.
 