Realtime AI content generation *spawn

What is Valve's position on photo-sourced art assets that aren't owned by the developer?
Max Payne 1 and 2 both use high-resolution photographs taken on the streets of New York and in Finland as the starting point for their textures. It's not like Remedy went to the "designers" of each brick pattern and acquired the rights to use that imagery in their games. Both the appearance of the bricks and the pattern laid by the mason can easily be considered art. How is that different from AI assets that aren't owned by a developer?
 
Photographs are often considered copyrighted art. A building is not. :p Nor a street. Or a brick. Anyone can take a picture of a building (even your house), street, brick, etc., and it's fine. But not just anyone can use the resulting photo, or an alteration of it, without getting consent from the owner of the photo.

AI-generated art uses copyrightable art as source material. That's where it starts to get legally tricky. More so if the art resembles the art it was trained on. Again, it's tricky.

It's legally fine for an artist to create art (like creating a character model) that looks similar to or is inspired by art from another artist, as long as it is not a blatant copy. That's to recognize the skill and effort of the artist who may have been inspired by another artist.

What about AI art? There's no artist skill or effort there. At best, it's using AI to create hundreds or thousands of versions of "something" and then going through them to pick out the ones that aren't horrific (in the case of AI generated human 2D art, /shudder some of them are nightmare inducing with how they do fingers or body parts).

So, if the AI art is, deliberately or not, very similar to the source art used in the AI training/algorithm, that presents a legal issue.

NOTE: this is WRT things classified as copyrightable. Having AI generate some random brick or building doesn't cause potential legal issues; there are no legal protections (in general) for the look of a brick or the look of a building. Having AI generate "art", OTOH, now impinges on things that are copyrightable.

At issue on Steam is the recent flood of games (especially NSFW games) using AI-generated art that is trained on copyrighted material and created to be as similar as possible to the copyrighted artist's work.

Regards,
SB
 
My company fired people as redundant because they decided to use Midjourney for asset creation to reduce costs, ignoring the fact that this is copyright infringement and hoping that they will never get caught and nobody will notice.
Personally, I find it disgusting.
 
This is the point where you're supposed to blow the whistle to the applicable authorities.
 
I don't believe it's settled case law whether or not it is copyright infringement. Also, at this point I believe the lawsuits are targeting the AI companies themselves (e.g. Midjourney directly) and not the companies/individuals using the tools.

Unless he's saying the company is violating Midjourney's terms, such as commercial usage on their free tier, or just doesn't want the optics of using AI tools.
 
What kind of authorities, though, would bother?
I'm assuming most countries have some sort of authority that deals with copyright infringement (giving Nesh the benefit of the doubt on his claim), be it a court or something else.
 
There are two things I want to point out. In practice, Midjourney is stealing other people's work, and they are making real artists redundant. As for whether it's copyright infringement, that depends on how the law eventually gets drafted to define it as such or not. It's a complicated subject in lawmaking, and companies are making terms and conditions deliberately so complicated and so grey that it becomes impossible to find justice. My company doesn't care anyway. When we were discussing it once, the response was, "nobody will know anyways".
 
I wouldn't be so sure, considering the recent Valve statement on AI-generated content in games submitted to Steam. If Valve can recognize it and/or implement methods to prevent its use (even if it's not 100% ironclad), why couldn't others?
 
To be honest I hope they can and will. I'm very disgusted by it. The board of directors makes disastrous business decisions, then throws all the weight on employees, and then tries to replace them with artwork-stealing AI for a quick buck. Their argument is: what's the difference between an artist using references and an AI creating art and touching it up a bit?
 
There's a difference between the moral state of ML art-duplicators and the legal state. Morally, MidJourney et al. are sophisticated image generators trained on content without consent, consent that every artist would likely refuse to give. Legally, they are in limbo as unregulated. In the UK, as elsewhere around the world, AI* tech is being considered by governments for whatever level of regulation. Until laws come into effect, I doubt anyone can hold a company legally responsible for unfair dismissal by replacement with tech. The whole Industrial Revolution was about replacing people with machines.

* AI is a really dumb term. There's nothing Intelligent about these things; they are just different types of computation. There is a current deep-fake scandal in the UK, with a popular economic advisor being cloned in a video advert for a scam investment. Intelligence can't make fake videos of people! No amount of human intelligence can deep-fake people; the best you could do is hire a lookalike. Deep-fake videos don't apply intelligence but algorithms. Understanding the implications of those videos and choosing whether to create them or not... that's intelligence!
 
August 19, 2023
Friday a judge ruled that AI-generated art in a particular case could not be copyrighted, even though the author created the algorithm and logic. The judge cited the absence of any "guiding human hand" as the primary reason. The article goes on to give an example of the flip side, where "a woman compiled a book from notebooks she’d filled with “words she believed were dictated to her” by a supernatural “voice”", and that book was found worthy of copyright.

If I understand correctly, in this case the judge is saying copyright determination is based on inclusion/non-inclusion of external "data" used by the AI algorithm? Since AI algorithms will become mainstream for any company's research and product development, does this imply that only data points originating internally can be used when seeking a product copyright?

Image of the art creator's work that can't be copyrighted. He is appealing the decision.
 
* AI is a really dumb term. There's nothing Intelligent about these things; they are just different types of computation. There is a current deep-fake scandal in the UK, with a popular economic advisor being cloned in a video advert for a scam investment. Intelligence can't make fake videos of people! No amount of human intelligence can deep-fake people; the best you could do is hire a lookalike. Deep-fake videos don't apply intelligence but algorithms. Understanding the implications of those videos and choosing whether to create them or not... that's intelligence!

The human brain is just different types of computation. Intelligence doesn't exist and we wouldn't know how to define it if it did! All hail rocks that compute.

Regardless, "realtime" AI content generation is just useless tech bro hype. That stupid assed "NPC AI" startup that's raised all those millions will crash and burn just like any number of useless startups.

Not that AI content generation isn't going to be in games. It, uhh, already is: Cyberpunk shipped years ago now and uses AI-driven NPC animation, Gran Turismo 7 it seems already uses AI art asset generation, Assassin's Creed has been using AI-suggested camera positions for cutscenes for quite a while now, etc. Devs have been on this way before the idiot tech bros; there's already a growing list of truly useful projects and tools for AI content generation that's done offline.

Because you don't want runtime generation. That costs CPU/GPU cycles, which are precious whether it's local or in the cloud. It also brings up unpredictable quality issues; far better to generate and check. We're a long way from having both reliable enough quality and enough freely available compute power for infinite realtime generation of a compelling game. Check back a decade from now and we'll see where we are.
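
To put the "generate and check" idea in concrete terms, here's a minimal sketch of what an offline pass could look like. Everything in it is hypothetical (the generator, the quality gate, and the 0.8 threshold are all stand-ins, not any real studio pipeline or vendor API):

```python
# Hypothetical sketch of an offline "generate and check" pass: batch-produce
# candidate assets ahead of time, run an automated quality gate, and keep only
# the survivors. All names here are made up.
import random

def generate_candidate(prompt: str, seed: int) -> dict:
    """Stand-in for an offline generator (textures, barks, whatever)."""
    rng = random.Random(seed)
    return {"prompt": prompt, "seed": seed, "quality": rng.random()}

def passes_quality_gate(asset: dict, threshold: float = 0.8) -> bool:
    """Stand-in for automated checks (resolution, tiling, artifact filters...)."""
    return asset["quality"] >= threshold

def offline_generate(prompt: str, n_candidates: int = 1000) -> list[dict]:
    """Burn the GPU cycles once, offline, instead of at runtime."""
    candidates = (generate_candidate(prompt, seed) for seed in range(n_candidates))
    return [c for c in candidates if passes_quality_gate(c)]

if __name__ == "__main__":
    keepers = offline_generate("mossy brick wall, seamless", n_candidates=1000)
    print(f"{len(keepers)} candidates survived the automated gate")
```

The point being that all the expensive, unreliable steps, including the final human review of whatever survives the gate, happen before anything ships, which is exactly the step a realtime generator can't afford.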
 
The human brain is just different types of computation.
It's not a computer - it doesn't calculate numbers. A computer's data model depends on deconstructing everything into quantities, whereas the brain can just compare stuff in a completely different domain. That in and of itself doesn't define intelligence though.
Intelligence doesn't exist
Speak for yourself! :p
Not that AI content generation isn't going to be in games
This is where 'AI' doesn't make sense as a term IMO. All these techniques are just another type of number-crunching algorithm, not intelligence.
 
Because you don't want runtime generation. That costs CPU/GPU cycles, which are precious whether it's local or in the cloud. It also brings up unpredictable quality issues; far better to generate and check. We're a long way from having both reliable enough quality and enough freely available compute power for infinite realtime generation of a compelling game. Check back a decade from now and we'll see where we are.

With some exceptions. GoW Ragnarok does use AI upscaling at runtime to generate the highest-resolution mip level of its textures on PS5, in order to save on storage.
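
Some napkin math shows why regenerating just the top mip at load time is such a big storage win: each mip level is a quarter the size of the one above it, so mip 0 alone is roughly three quarters of the full chain. The snippet below is purely illustrative (uncompressed texels, made-up sizes), not the actual PS5/Ragnarok texture format or pipeline:

```python
# Napkin math only: size of a full mip chain vs. one that ships without the
# top level and reconstructs it at load time.

def mip_chain_bytes(width: int, height: int, bytes_per_texel: int) -> int:
    """Total bytes for a mip chain from (width, height) down to 1x1."""
    total = 0
    while True:
        total += width * height * bytes_per_texel
        if width == 1 and height == 1:
            break
        width, height = max(1, width // 2), max(1, height // 2)
    return total

full_chain = mip_chain_bytes(4096, 4096, 4)     # ship every mip level
without_top = mip_chain_bytes(2048, 2048, 4)    # ship mip 1 down, upscale mip 0 at load
print(f"saved by dropping the top mip: {1 - without_top / full_chain:.1%}")  # ~75%
```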
 
And here is a reason why nVidia is selling $40 billion worth of hardware...

That is a great demonstration of AI.

This is a hilariously bad demo, the kind of thing that never comes out and went out of style as a tech demo over a decade ago after even the common observers started to twig to this stuff not coming out anytime soon. The voice is terrible, 0 stars, the responses aspire to the heights of chatgpt 2, no serious gamedev would use this with a straight face. It's a great example of Nvidia experiencing too much success and starting to throw money into a dumpster fire just like all tech companies that get big enough eventually start doing. (That being said I wouldn't be surprised if Ubisoft makes an "experimental" game with this, because they'll throw money at literally anything new).

In some real news, this year's GDC is filled with talks about actually useful uses of machine learning in gamedev. Neural networks for a Civ-style game is super welcome; these games have never had good AI. Photo-based facial rig creation is going to be a huge timesaver. Also, from last year, the presentation on Nightingale's AI animation: https://www.gdcvault.com/play/1028813/AI-Summit-Living-in-Procedural

AI voices are good temporary placeholders for hearing how a line might sound out loud, and have accidentally been included in stuff like that new Prince of Persia game (they didn't sub in the VA lines before shipping lol). I imagine voice actors will be hearing them in booths soon, if they're not already, so they have lines to respond to and bounce off of. Further, the Screen Actors Guild has even codified VA rights for AI voice cloning, so I wouldn't doubt that starting in 2025 we'll be seeing some pre-processed AI voices used to expand voice actors' line readings out to more variations on lines and the like. I've already heard someone do this for a Skyrim mod, and the AI voices sound alright enough for the typically vacant, emotionless delivery Bethesda manages to get out of their voice actors as it is. I wouldn't be surprised if the best of these tools were capable of better though; Bethesda game voice acting is a pretty low bar.
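
For what it's worth, that placeholder-VO workflow is simple enough to sketch. This is a made-up example, not any studio's tool: the "TTS" step here just writes silence, standing in for whatever properly licensed voice service would actually be plugged in:

```python
# Hypothetical placeholder-VO pass: run every unrecorded script line through a
# text-to-speech step so designers can hear dialogue in context before the real
# actors record it. The TTS call is a stub that emits silence.
import wave
from pathlib import Path

SAMPLE_RATE = 22050  # Hz, 16-bit mono

def synthesize_speech_stub(text: str, voice: str, seconds: float = 1.0) -> bytes:
    """Stand-in for a real TTS backend: returns raw 16-bit mono silence."""
    return b"\x00\x00" * int(SAMPLE_RATE * seconds)

def build_placeholder_vo(script: dict[str, list[str]], out_dir: Path) -> None:
    """Write one placeholder .wav per script line, named character_lineindex."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for character, lines in script.items():
        for i, line in enumerate(lines):
            pcm = synthesize_speech_stub(line, voice=f"placeholder_{character}")
            with wave.open(str(out_dir / f"{character}_{i:04d}.wav"), "wb") as wav:
                wav.setnchannels(1)   # mono
                wav.setsampwidth(2)   # 16-bit
                wav.setframerate(SAMPLE_RATE)
                wav.writeframes(pcm)

if __name__ == "__main__":
    build_placeholder_vo({"guard": ["Halt!", "Who goes there?"]}, Path("vo_placeholder"))
```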

I do know Ubisoft has an ML tool for auto line-completion suggestions when writing dialogue, though based on what I've tried and seen of ChatGPT and the like, I'm not sure how useful this is yet, as the most entertaining output I've seen is when the AI produces something hilariously bizarre and surreal.
 