NVIDIA discussion [2024]

  • Thread starter Deleted member 2197
Purely from a graphics perspective? Or from gameplay too?
Both.

If games are going to start including AI in, e.g., interactions with NPCs, then will any two players have the same experience? Or if AI is used to generate plot-lines, or other "content"? I don't just mean what can currently be achieved with an RNG, but something that closely and semi-intelligently tracks the interactions with the player.
Interactions with AI NPCs should also fit that criterion. Games are created to tell stories or provide fun gameplay (sometimes both), and in both cases the stories and the gameplay should be the same for all players.
Current approaches to putting genAI in games (chatbots as NPCs?) seem a dead end to me because that's not what gamers want from games in general.
That's not to say that there can't be other more interesting approaches.
 
That's not to say that there can't be other more interesting approaches.

Yeah I think you're right, but it was the more interesting approaches that I was thinking of.

Open-world games are notoriously either shite, or very difficult and/or expensive to do well. Because there's never enough time/money/people to do a truly good job, even on the budgets that Chris Roberts has had access to. So they rely on procedural generation, that sort of thing, then end up expensive and still shite. NPCs standing on a park bench in the rain at night.

Maybe AI could make a more coherent effort. But anyway I'm getting away from the graphics stuff so I'll hush now.
 
I think this is of course going to happen gradually. For example, back then many games had pre-programmed behaviors: if you stand on a moving platform, you move with it because the programmer programmed that. That's predictable. Some other games didn't do that, and if you didn't walk with the platform you'd drop off.

However, after many games started to use physics simulation, many of them began to show somewhat unpredictable behaviors, because those behaviors are not programmed by the programmer but happen "organically" out of the simulation. This is unpredictable, but on the other hand it's also more interesting. Recently I saw Helldivers 2 footage where a huge bug was killed and sent flying into the air by an explosion, then landed on and killed a player character. It's probably not what the developers intended, but it's actually part of the fun.

So I think it's quite possible that future games will simulate not just physics but also NPC behavior. It'll be even more unpredictable, but maybe also more interesting. In this kind of game, maybe the developers will not be writing a story, but just setting up the environment for a story to happen, and each player will have their own different story.
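The moving-platform example above can be sketched in a few lines. This is a toy illustration (all names and numbers are made up): the "scripted" version hard-codes the player inheriting the platform's motion, while the "simulated" version only drags the player via a friction factor, so the player can slip off organically.

```python
def scripted_step(player_x, platform_x, platform_vx, dt=1.0):
    """Scripted: the programmer explicitly moves the player with the platform."""
    platform_x += platform_vx * dt
    player_x += platform_vx * dt   # hard-coded: player inherits platform motion
    return player_x, platform_x

def physics_step(player_x, platform_x, platform_vx, friction=0.5, dt=1.0):
    """Simulated: the player is only dragged by friction with the surface."""
    platform_x += platform_vx * dt
    player_x += platform_vx * friction * dt  # slips behind if friction < 1
    return player_x, platform_x

# Run both for ten steps from the same start
px, fx = 0.0, 0.0
for _ in range(10):
    px, fx = scripted_step(px, fx, platform_vx=1.0)
print(px, fx)   # scripted: player stays exactly on the platform -> 10.0 10.0

px, fx = 0.0, 0.0
for _ in range(10):
    px, fx = physics_step(px, fx, platform_vx=1.0)
print(px, fx)   # low friction: player drifts toward the platform edge -> 5.0 10.0
```

The scripted version can never surprise you; the simulated one produces emergent outcomes (like sliding off) that nobody explicitly authored, which is the same dynamic the Helldivers 2 anecdote describes.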
 
By definition, generative AI will lead to different experiences right? It's a paradigm shift that we will need to get used to. Just like people today don't expect to receive the same response from ChatGPT and it's perfectly fine.
 
Dimensity chips from MediaTek will have an NVIDIA GPU and support RTX and DLSS 3 Frame Generation inside car cockpits.

MediaTek specifically highlights that four automotive SoCs, Auto Cockpit CX-1, CY-1, CM-1 and CV-1, will be harnessing this new GPU IP and additionally support NVIDIA Drive OS. The platform is expected to employ NVIDIA's "next-gen" RTX GPU architecture, which will be utilized for AI-focused and graphics-intensive apps on MediaTek Auto Cockpit, including a graphical interface with AI capabilities to aid drivers through an LLM assistant.

Dimensity Auto Cockpit takes in-cabin entertainment to the next level. It integrates an NVIDIA RTX GPU, which supports ray tracing for realistic visuals and lighting effects in games, plus AI upscaling and frame generation for fast, fluid action.


 
By definition, generative AI will lead to different experiences right? It's a paradigm shift that we will need to get used to. Just like people today don't expect to receive the same response from ChatGPT and it's perfectly fine.
I kinda expect the same response to my question from the same system though. This "paradigm shift" won't happen if the end result would be a destruction of what games are being loved for. I'm sure there are ways to implement even generative AI into games but switching predesigned NPCs to that is as I've said likely a dead end.
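The determinism question here really comes down to decoding strategy. A toy sketch (made-up tokens and probabilities, purely illustrative): greedy decoding always returns the same answer, while temperature sampling varies run to run unless the seed is fixed.

```python
import random

# Hypothetical next-token distribution from a language model
NEXT_TOKEN_PROBS = {"hello": 0.5, "hi": 0.3, "greetings": 0.2}

def greedy():
    # Temperature -> 0: always pick the most likely token (fully deterministic)
    return max(NEXT_TOKEN_PROBS, key=NEXT_TOKEN_PROBS.get)

def sample(rng):
    # Temperature 1: draw a token proportionally to its probability (stochastic)
    tokens, probs = zip(*NEXT_TOKEN_PROBS.items())
    return rng.choices(tokens, weights=probs)[0]

print(greedy(), greedy())               # same answer every time -> hello hello
rng = random.Random()
print(sample(rng), sample(rng))         # may differ between calls
```

So "the same response from the same system" is a knob games could expose: deterministic decoding (or a per-save seed) for players who want a shared experience, sampling for those who don't.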
 
Maybe it is time for games to evolve to be more like "reality" and move away from rigid plot lines and fixed endings. I would welcome the switch to more open, less scripted gaming adventures that could possibly stretch gameplay much further than we get today. You will naturally get gamers who prefer the traditional same-ending-for-everyone result, and I imagine studios would include an optional setting for that purpose.
 
The first iteration of this won’t be anything material to gameplay. It’ll be used for NPC small talk or inconsequential behaviors. Those would already be a big improvement from the canned stuff.
 
This "paradigm shift" won't happen if the end result would be a destruction of what games are being loved for. I'm sure there are ways to implement even generative AI into games but switching predesigned NPCs to that is as I've said likely a dead end.
I'm not sure why you hold these fatalistic opinions or why you find them "likely"; they seem to be just that - opinions or preferences.

Edit: More choices/tools for game developers should by default be good, yet your posts assume for this case they are by default bad.
 
NVIDIA NIM and NeMo Retriever microservices let developers link AI models to their business data — including text, images, and visualizations, such as bar graphs, line plots, and pie charts — to generate highly accurate, contextually relevant responses. Developers using these microservices can deploy applications through NVIDIA AI Enterprise, which provides optimized runtimes for building, customizing, and deploying enterprise-grade LLMs. By leveraging NVIDIA microservices, Cloudera Machine Learning will enable customers to unleash the value of their enterprise data under Cloudera management by bringing high-performance AI workflows, AI platform software, and accelerated computing to the data – wherever it resides.
...
“Cloudera is integrating NVIDIA NIM and CUDA-X microservices to power Cloudera Machine Learning, helping customers turn AI hype into business reality,” said Priyank Patel, Vice President of AI/ML Products at Cloudera. “In addition to delivering powerful generative AI capabilities and performance to customers, this integration will empower enterprises to make more accurate and timely decisions while mitigating inaccuracies, hallucinations, and errors in predictions – all critical factors for navigating today’s data landscape.”
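What NIM and NeMo Retriever package at enterprise scale is the retrieval-augmented generation (RAG) pattern: retrieve relevant business data first, then hand it to the model as grounding context. A minimal generic sketch of that pattern (not NVIDIA's actual API; bag-of-words cosine similarity stands in for a real embedding model, and the "LLM" is stubbed):

```python
from collections import Counter
import math

# Hypothetical "business data" corpus
DOCS = [
    "Q3 revenue grew 12 percent driven by data center sales",
    "The bar graph shows GPU shipments by quarter",
    "Employee handbook: remote work policy",
]

def vectorize(text):
    # Toy stand-in for an embedding model: bag-of-words counts
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query, keep the top k
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def answer(query, docs):
    context = retrieve(query, docs)   # ground the response in retrieved data
    return f"Based on: {context[0]}"  # stand-in for the actual LLM call

print(answer("how did revenue grow", DOCS))
```

Grounding the generation step in retrieved documents is what the quote means by "mitigating inaccuracies and hallucinations": the model answers from the customer's data rather than from its weights alone.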
 
NVIDIA's first attempt to license its GPU IP to third parties dates back to 2013, when the company proposed licensing its Kepler GPU IP to rival Arm and Imagination Technologies. That effort, at the time, fell flat on its face. But over a decade later, with a fresh effort to license out some of its IP, it seems NVIDIA has finally succeeded. Altogether, MediaTek's new Dimensity Auto Cockpit system-on-chips will rely on NVIDIA's GPU IP, Drive OS, and CUDA, marking a historic development for both companies.
...
MediaTek's family of next-generation Dimensity Auto Cockpit processors consists of four distinct system-on-chips: CX-1 for range-topping vehicles, and CY-1, CM-1, and CV-1 down to entry-level cars. These are highly integrated SoCs packing Armv9-A-based general-purpose CPU cores as well as NVIDIA's next-generation graphics processing unit IP. NVIDIA's GPU IP can run AI workloads for driver assistance as well as power the infotainment system, as it fully supports graphics technologies such as real-time ray tracing and DLSS 3 image upscaling.
...
Without a doubt, licensing graphics IP and platform IP to a third party marks a milestone for NVIDIA in general, as well as its automotive efforts in particular. Leveraging DriveOS and CUDA beyond NVIDIA's own hardware platform is a big deal for a business unit that NVIDIA has long considered poised for significant growth, but has faced stiff competition and a slow adoption rate thanks to conservative automakers. Meanwhile, what remains to be seen is how MediaTek's new Dimensity Auto Cockpit processors will stack up against NVIDIA's own previously announced Thor SoC and associated DRIVE Thor platform, which integrates a Blackwell-based GPU delivering 800 TFLOPS of 8-bit floating point AI performance.
 
NVIDIA announced today that they have achieved further performance optimizations for their TensorRT-LLM inference library, gaining 2x to 3x performance over the previous version.

They also announced that the new H200 achieves 45% more performance on Llama 2 than the H100.

 
Isn’t OpenAI trying to roll their own hardware?
That's at least several years away, and they are talking about millions of GPUs; who is going to produce such large numbers to begin with? NVIDIA is guaranteed the majority chunk of this. Not to mention no software ecosystem is mature enough for such a large undertaking, except CUDA of course.
 