What to expect from next-gen graphics [XBSX, PS5]

You can, if you do have procedural generation with coarse artist control.
Yes, but as already mentioned, that doesn't exist yet. You've asserted that we absolutely will have bigger, better, more detailed worlds. I'm saying that they can only happen IF we get better tools. IF we don't, then the content will be limited by cost to create.

The graph shows a lot of scatter at the right, present-day side. If we drew a more accurate curve instead of a straight line, it would point downwards, with the peak around 2005.
Just to mention - I have no idea how accurate this data is and won't argue it. In any case, there's a need to reduce costs everywhere and always.
That's because the range of games has increased dramatically and now includes lower-budget indies and smaller titles in the age of digital downloads. At the top end, the AAA budget - the place where people will be wanting to see content variety really pushed - we have the exponential increase. For Spider-Man 2 to have the order-of-magnitude more content enabled by the SSD no longer being the limiting factor, at the quality you see in The Heretic, there'll be a significant increase in cost.

Source : https://www.gamasutra.com/blogs/RaphKoster/20180117/313211/The_cost_of_games.php

Note that that exponential growth curve already includes the constant improvement in creative tools. We've had the likes of Allegorithmic's Substance Designer since 2010 instead of artists creating material textures by hand in Photoshop. So despite the cost to create a texture dropping considerably, the overall cost to create the assets for a game has increased; content-creation cost is growing faster than the content-creation savings.

There is of course the possibility of a paradigm shift in content creation, but that's largely theoretical at this point. All the procedural techs we'd be looking to use have existed since the PS360 era without gaining widespread adoption. What are the limiting factors, and will those change?
 


Developers are currently using procedural generation for content creation.
 
Easier to implement more detail, but significantly more labor to create it.

edit: I can see it with myself... you can't impress anymore with a character that was considered ultra-impressive in the PS2 or PS3 days.
But where this ties in with the lower-cost games in that graph I linked to is that this quality can be targeted in indie games - say, with a low-poly art style - and with good lighting it still looks good. Better tools mean creating these assets is a lot easier (cheaper) than back in the day when it was all high-tech. Games that cost around $1 million in 1990 can have similar-looking counterparts made now for far, far below that.
 
You only have to look at the end credits of modern games to understand that they're vastly more expensive to develop than they were in the past. Procedural generation of assets will basically be a necessity to create game assets at high volume. I also wonder if photogrammetry makes asset creation easier. I don't know how the workflow compares for things like rocks, foliage, and simpler static objects like chairs, desks, and lamps. I'm sure photogrammetry has its own challenges, but maybe on the whole it's easier than crafting from scratch.

Edit:

I also wonder if we'll get into asset-sharing territory this gen. Hey, that's the chair from Assassin's Creed! If you're Ubisoft, maybe you could build huge asset libraries that could be shared amongst all of the teams working on Assassin's Creed, Tom Clancy, Watch Dogs etc. People may find it annoying if they can spot the shared assets, but ultimately if they want scale, then re-inventing things is a lot of money poorly spent.
 
Yes, but as already mentioned, that doesn't exist yet. You've asserted that we absolutely will have bigger, better, more detailed worlds. I'm saying that they can only happen IF we get better tools. IF we don't, then the content will be limited by cost to create.
Ok, then we just agree :)
The context of my thoughts is mostly about potential future progress. In this case I'm thinking more towards the end of next gen, not the start. That's usually how long it takes until such demos become realistic for real games.

What are the limiting factors and will those change?
Procedural content tends to be boring.
Meshes are very hard to generate and modify. For example, robust CSG operations are quite impractical with meshes.
Tools like SD are nice but still restricted and limited to a certain aspect, in this case bitmap data. But we want general tools that work for both geometry and texture.

The solution can only be to combine different representations of geometry. Remembering the promises from Unlimited Detail or Atomontage, we think of: easy LOD, texture = geometry, easy CSG, easy photogrammetry and so forth.
What those guys get wrong is their aim to replace meshes, although meshes are a much better and more optimized representation of a surface in many cases.
So to get the best of all, we want to convert back and forth between voxels, SDFs, point clouds, triangles, or whatever suits the task (a small sketch of the CSG part is below).
This is possible, but we lose exact artist control because of resampling: the final triangles and texels are no longer the ones the artist made initially.
This was unacceptable in the low-poly days, but with detail increasing more and more, this issue matters less and less.
These days it seems likely we can even get better results from automated processing than from manual tweaking; we already see this e.g. in high-detail tools like ZBrush, where the artist no longer cares about edge flow but just shape.
Additionally, we get more efficient data (non-visible overlaps removed, resolution adaptable to a given platform's performance budget).
And finally, fully automatic generation of LODs.
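To make the 'easy CSG' point a bit more concrete, here is a minimal Python sketch (the shapes and numbers are just made up for illustration): with signed distance functions, boolean operations are one-liners, while doing the same robustly on triangle meshes is a hard problem on its own. A mesher like marching cubes or dual contouring would then resample the field back into triangles - which is exactly where the resampling trade-off above comes from.

```python
import math

# Signed distance functions: negative inside, positive outside.
def sphere(p, center, radius):
    x, y, z = (p[i] - center[i] for i in range(3))
    return math.sqrt(x*x + y*y + z*z) - radius

def box(p, center, half_extents):
    # Distance to an axis-aligned box.
    d = [abs(p[i] - center[i]) - half_extents[i] for i in range(3)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in d))
    inside = min(max(d[0], d[1], d[2]), 0.0)
    return outside + inside

# CSG on SDFs is trivial compared to mesh CSG:
def union(d1, d2):        return min(d1, d2)
def intersection(d1, d2): return max(d1, d2)
def subtract(d1, d2):     return max(d1, -d2)   # d1 minus d2

# Example solid: a unit box with a spherical bite taken out of one corner.
def scene(p):
    return subtract(box(p, (0, 0, 0), (1, 1, 1)),
                    sphere(p, (1, 1, 1), 0.8))

print(scene((0.0, 0.0, 0.0)))   # negative: inside the solid
print(scene((0.9, 0.9, 0.9)))   # positive: this corner was carved away
```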

Main problem: Not easily compatible with current optimizations like instancing of geometry and texture.


...maybe I slipped from 'procedural content' to geometry processing, but yes - I think the main reason this hasn't taken off yet is a lack of geometry research in the games industry. There is Simplygon, which is good, but like Substance Designer it is not yet powerful enough to really automate all content processing.
Having worked on this crap for 3 years now, I can say it is hard but surely promising. The longer I work on it, the more exciting options I see. It will not work for absolutely everything (e.g. characters), but for most things, I guess.
 
... oh, and assuming we ever get there, there are all those other cost-saving promises:

Correct lighting out of the box, no more shadow-map issues, no more manual probe placement and other tuning.
Far-fetched ideas like characters moving automatically without a need for animation.

So I see many more options to reduce costs than to increase them in the future.
 
See the reflection of one person at 13 seconds - he is not inside the room.

[attached images: reflection screenshot; photo vs. render comparison]


The marble on the stairs isn't the same either.
Took me a while to realize the bottom picture was a render. A nicely ray-traced render. Interested to see this (maybe) run in realtime. I'm not even sure if I want to play a game that looks like that. Shit is going to give me nightmares.
 
See the reflection of one person at 13 seconds - he is not inside the room.
At first I took it as video footage and thought it was just the cameraman's reflection on those walls, but then there are the footprints appearing on the floor, and on closer look there are obvious rendering artifacts (AA?) on the light switches, for example.

In the second scene, the paintings are also not the same as in the photograph.

Very impressive stuff. Even better than the best Unreal Engine 4 architectural visualization videos out there.
 
I also wonder if we'll get into asset-sharing territory this gen. Hey, that's the chair from Assassin's Creed! If you're Ubisoft, maybe you could build huge asset libraries that could be shared amongst all of the teams working on Assassin's Creed, Tom Clancy, Watch Dogs etc. People may find it annoying if they can spot the shared assets, but ultimately if they want scale, then re-inventing things is a lot of money poorly spent.
We talked about this ten years ago. Given a fairly homogenised art-style, it's a little curious why it hasn't happened. A contemporary city is going to be populated with the same general stuff, so why have 10 devs each designing 10 couches that repeat constantly?
 
... oh, and assuming we ever get there, there are all those other cost-saving promises:

Correct lighting out of the box, no more shadow-map issues, no more manual probe placement and other tuning.
Far-fetched ideas like characters moving automatically without a need for animation.

So I see many more options to reduce costs than to increase them in the future.

To be honest, I expect the costs to increase no matter what. There will hopefully be a push to reduce crunch to something sane, but overall I expect better processes will just mean more content, not the same content for cheaper. I doubt teams will shrink, at least at the AAA scale. If you have 10 devs and give them procedural tools, you'll just have 10 devs producing more content in the same time, and teams will likely continue to grow.
 
To be honest, I expect the costs to increase no matter what. There will hopefully be a push to reduce crunch to something sane, but overall I expect better processes will just mean more content, not the same content for cheaper. I doubt teams will shrink, at least at the AAA scale. If you have 10 devs and give them procedural tools, you'll just have 10 devs producing more content in the same time, and teams will likely continue to grow.
You're probably right. At least about AAA.
But at some point some people should realize that more content is just pointless, and move elsewhere. Or is it just me who is tired of open worlds, with games that often feel more like work than fun?
This open-world boredom has to end. I mean, I wanted it too - huge worlds and all that - but now that we have it, I see it doesn't add so much. Those worlds feel repetitive with only minor variation. No matter if I look north or south, it's all the same until the horizon.
So I'm often not happy with the games I can get currently. I want a smaller world where I don't get lost, and I also want a smaller and shorter game because I have no time for an alternate fantasy reality. No side quests, no multiple endings, no fuss.

Assuming I'm not alone here, don't you see a possibility that smaller / indie teams could benefit the most from procedural content? I mean, AAA can do it with manpower, but indies can't do many things at all. So for them the change would have a larger impact.
I hope competitive but smaller teams remain even with increasing standards. If we get to the point where there are just 3 huge companies left, and all of them use UE4, I'm out.

To come back on topic, there is one interesting thing I realize just now: years ago, 'better' or 'next-gen' graphics meant mainly technology. More colors and pixels, more triangles, better lighting, texture resolution and all that.
But nowadays it seems mostly about content: more assets, better assets, better artists to maximize that cutscene, larger worlds.
This feels somewhat wrong, because we are not there yet with technology. A tiny but realistic-looking game could blow this Unity demo away, at least for me.
 
We talked about this ten years ago. Given a fairly homogenised art-style, it's a little curious why it hasn't happened. A contemporary city is going to be populated with the same general stuff, so why have 10 devs each designing 10 couches that repeat constantly?
I assume most of the work is on environments and high-quality character models, which are generally pretty unique. Since couches and other things are easy, they probably just get the new hires to model stuff like that. Would be cool to reuse stuff across different games, but people still have to place all those objects unless there is good procedural generation of some sort.
 
There are these tools too
Currently, most procedural tools seem 'just' about placement. I guess that's only the beginning, with more options for true creation to come? (A toy example of what placement-only proceduralism looks like is below.)
Automatic placement alone is not so super helpful if someone still has to model all this stuff first.
I remember papers about procedural furniture, up to interiors and complete houses, and I guess Houdini can do things like that?
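To be clear on what I mean by 'just placement', here is a toy Python sketch (the prop names and rules are invented, this isn't any particular tool's API): scatter hand-made assets with a seeded RNG and a minimum-spacing rule. The placement is free, but an artist still had to model every prop in that list.

```python
import math
import random

# Toy placement pass: scatter hand-made props over a 50x50 m patch.
# The asset names and rules here are invented for illustration only.
PROPS = ["rock_small", "rock_large", "fern", "dead_tree"]

def scatter(seed, count, area=50.0, min_spacing=2.0):
    rng = random.Random(seed)          # deterministic for a given seed
    placed = []
    attempts = 0
    while len(placed) < count and attempts < count * 20:
        attempts += 1
        x, y = rng.uniform(0, area), rng.uniform(0, area)
        # Rejection test: keep props from overlapping each other.
        if any(math.hypot(x - px, y - py) < min_spacing for _, px, py, _ in placed):
            continue
        prop = rng.choice(PROPS)
        rotation = rng.uniform(0, 360)  # random yaw so repeats are less obvious
        placed.append((prop, x, y, rotation))
    return placed

for prop, x, y, rot in scatter(seed=42, count=10):
    print(f"{prop:10s} at ({x:5.1f}, {y:5.1f}), yaw {rot:5.1f} deg")
```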

Which brings us back to questioning the real need for such 'boring' environments.
For example, technically I'm very interested in the procedural cities of Star Citizen. But would I like to go there in a game? Not really.

In this sense the Unity video is really great. Lots of the cool static environment is also made using vector-field tracing, if you look closely. That's very little manual artist work - just design the vector field and set up some geometry to flow through it. It shows what happens when such technology is used properly by artists. Very inspiring.
I have used vector-field tracing too, on the surface of meshes. The goal was quadrangulation, but I already thought: hey, cool, I could use this to drive automatic tubes or cables through a science-fiction environment with almost no work - drag some control points until it looks good. (Not a new idea - vegetation-growing systems work similarly.) A rough sketch of the idea is below.
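Roughly, that's just streamline tracing. A minimal Python sketch (the swirl field is made up, and a real tool would trace on a mesh surface rather than in free 2D space): start a 'cable' somewhere and keep stepping along the field; the resulting polyline drives the tube geometry.

```python
import math

# A made-up 2D vector field: a gentle swirl around the origin.
def field(x, y):
    # Rotate the radial direction 90 degrees and add a slight outward drift.
    return (-y + 0.2 * x, x + 0.2 * y)

def trace_streamline(start, step=0.05, num_steps=200):
    """Trace a polyline through the field with simple Euler integration.
    The resulting points could drive a tube/cable mesh along the path."""
    points = [start]
    x, y = start
    for _ in range(num_steps):
        vx, vy = field(x, y)
        length = math.hypot(vx, vy)
        if length < 1e-6:          # stop at stagnation points
            break
        x += step * vx / length    # normalized step, roughly constant arc length
        y += step * vy / length
        points.append((x, y))
    return points

cable = trace_streamline((1.0, 0.0))
print(f"traced {len(cable)} points, ends near ({cable[-1][0]:.2f}, {cable[-1][1]:.2f})")
```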

Then we see how they clone a piece of geometry along a spline - the dark scene with that yellowish altar thing. Looks great, a bit like some stylish pixel-art mobile game.
I remember doing this 20 years ago, when games used flat triangles but I experimented with curvy Bezier patches. It gave me cool alien structures back then, and again: very little work. (A minimal sketch is below.)
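That trick is about as cheap as procedural techniques get. A minimal Python sketch (the control points and the 'altar_segment' name are invented for illustration; a real tool would also orient each copy along the curve tangent and space copies by arc length rather than by parameter):

```python
# Minimal sketch of instancing a module along a cubic Bezier curve.

def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u*u*u * p0[i] + 3*u*u*t * p1[i] + 3*u*t*t * p2[i] + t*t*t * p3[i]
        for i in range(3)
    )

def clone_along_spline(control_points, copies):
    """Return one placement (position only, for brevity) per copy."""
    p0, p1, p2, p3 = control_points
    return [bezier(p0, p1, p2, p3, i / (copies - 1)) for i in range(copies)]

# Drag four control points until it looks good, then stamp 8 copies.
curve = [(0, 0, 0), (2, 4, 0), (6, 4, 1), (8, 0, 2)]
for pos in clone_along_spline(curve, copies=8):
    print("place 'altar_segment' at", tuple(round(c, 2) for c in pos))
```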

And finally there is lots of stuff that looks like 3D fractals. Math at work, not an army of artists. Demoscene meets games.
They made so many procedural wet dreams come true here, and the result is just awesome. Yes, I want to go there! :)

But the point is: ignoring characters and cinematic direction, this does NOT look like content that is impractically expensive to make. To me it looks like the opposite of that.
 
I assume most of the work is on environments and high-quality character models, which are generally pretty unique. Since couches and other things are easy, they probably just get the new hires to model stuff like that. Would be cool to reuse stuff across different games, but people still have to place all those objects unless there is good procedural generation of some sort.

I think a lot of the couch-type stuff is outsourced now, but it's still very expensive.
 