Above: The procedurally-generated No Man’s Sky
With the growing popularity of AI-driven 2D image generators, it's no surprise that many people are starting to speculate about how AI-driven 3D model generators may soon transform games and metaverse platforms. (For instance: AM Radio and me.)
However, veteran game industry artist Aura Trilio just hit me with a knowledgeable and refreshing blast of anti-hype, pointing out how difficult (if not impossible) it would be for an AI to feasibly create whole 3D worlds:
“AI isn’t going to affect any field that doesn’t have a giant database of free (stolen) training data for it to absorb,” she explained. “There aren’t enough 3d models in existence for basically any model to eat and spit out anything usable. Even most bespoke, handmade 3d model generation algorithms spit out models that are completely unusable in games because the logic behind character creation and topology is extremely precise and needs to be carefully thought out. So: it’s not going to change it.”
But wait: the acclaimed No Man's Sky generated a whole galaxy of content through automated procedural generation. Doesn't that suggest such techniques are feasible for other games and virtual worlds?
Aura argues otherwise:
“No Man’s Sky’s procedural generation has been the subject of extensive criticism over the years and may be its weakest point in a game that has otherwise become pretty strong.
“Not every game has NMS’ goals and NMS’ goals are a big part of why procedural generation is even a good fit for it to begin with, similar to roguelikes. This sort of thing is never one size fits all. Look at Chasm for an example of a game that put a huge amount of years into procgen and faceplanted because it turned out to add nothing to the game and take substantially longer than making it without.
“Games have been using procgen in various ways for literally decades. That’s not new. What’s new here is
- Attempts to sell it as a method to reduce costs (this almost never works) and
- Claims that absolutely everything being procgen is a good idea, usually from people who have no history in procgen and thus don’t understand that it’s a tool. You don’t put frosting on all your food, you don’t build all your systems from procgen from the ground up. The best possible thing you can hope for in that scenario is taking 20 years and ending up with Dwarf Fortress. AAA gamedev doesn’t have that kind of time or money.”
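To make Aura's "it's a tool" point concrete, here's a minimal sketch of the kind of targeted procgen a team might apply to one system, say, terrain height, while hand-authoring everything else. This is my own illustration, not code from any shipped game; the function name and parameters are invented for the example:

```python
import random

def value_noise_2d(width, height, cell=8, seed=42):
    """Build a heightmap by interpolating random lattice values,
    a bare-bones cousin of the noise functions real terrain tools use."""
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    lattice = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    def smoothstep(t):
        # Ease the interpolation so cell boundaries don't show as creases.
        return t * t * (3 - 2 * t)

    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            cx, cy = x / cell, y / cell
            x0, y0 = int(cx), int(cy)
            tx, ty = smoothstep(cx - x0), smoothstep(cy - y0)
            # Bilinear blend of the four surrounding lattice values.
            top = lattice[y0][x0] + (lattice[y0][x0 + 1] - lattice[y0][x0]) * tx
            bot = lattice[y0 + 1][x0] + (lattice[y0 + 1][x0 + 1] - lattice[y0 + 1][x0]) * tx
            row.append(top + (bot - top) * ty)
        grid.append(row)
    return grid

heights = value_noise_2d(64, 64)
print(f"center height: {heights[32][32]:.3f}")  # deterministic, thanks to the seed
```

Run it twice with the same seed and you get the same terrain, which is why procgen works best as a focused tool: it trades authoring time for tunable parameters in one system, not the whole game.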
My own thinking, based on Dual Universe and other titles, is that game artists may eventually use AI-generated 3D models as a starting point, then do heavy curation and hand-editing of the results.
Aura is only somewhat persuaded on that point:
“That could happen but we’re a long, long, long way out from it being feasible. Not just because there’s a massive lack of training data, not just because nearly all 3d model generation results in extremely unusable topology, but also because retopology isn’t in a place where this sort of stuff is worth anyone’s time. The workflow of zbrush -> low poly is just a thousand times more accurate to what people actually need. There are no tools in existence to do what you’re describing on an acceptable level.”
And getting back to her point about the dearth of training data: it would probably take a very large game publisher training on its own 3D assets, drawn from a huge library of games, to make that feasible, if not desirable:
“The only publishers that are big enough to have the resources to do that would be Activision, EA, Microsoft, etc. So it would have to be one of the very big companies. And then the question on top of that is ‘Can you actually get useful data out of that?’ Throwing literally everything in is going to result in significant inconsistencies because of the different requirements between projects. 3D models need to be very precisely made to work in games. I think you would probably get a tool that output a huge amount of garbage unless you put a colossal amount of work into it.
“For basic forms like rocks, humans, etc, it’s genuinely easier to just make a generation tool yourself than train an AI to do this sort of thing. This is why Speedtree exists.” (See above.)
“Random generation of humans in particular is actually a pretty heavily explored space in games. The cost to make something like that yourself is pretty low compared to the amount of programmer time you’d need to train an AI on what a human — unclothed — even looks like.”
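Her "make a generation tool yourself" point about simple forms is easy to illustrate. A toy rock generator can be a few dozen lines: place vertices on a sphere, then jitter each radius to rough up the silhouette. This is my own hedged sketch, not Speedtree's approach, and the names and parameters are invented:

```python
import math
import random

def rock_vertices(rings=8, segments=12, bumpiness=0.25, seed=7):
    """Place vertices on a unit sphere, then randomize each radius,
    the core trick behind many quick rock-generator tools."""
    rng = random.Random(seed)
    verts = []
    for r in range(1, rings):  # skip the poles to keep the sketch simple
        theta = math.pi * r / rings
        for s in range(segments):
            phi = 2 * math.pi * s / segments
            radius = 1.0 + rng.uniform(-bumpiness, bumpiness)
            verts.append((radius * math.sin(theta) * math.cos(phi),
                          radius * math.cos(theta),
                          radius * math.sin(theta) * math.sin(phi)))
    return verts

# Emit a point cloud in OBJ vertex syntax; in a real tool, faces and clean
# topology would be built deterministically from the ring/segment grid.
for x, y, z in rock_vertices():
    print(f"v {x:.4f} {y:.4f} {z:.4f}")
```

Note the catch Aura keeps flagging: jittering vertices is trivial, but producing clean, game-ready topology from them is where the real engineering lives.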
In other words, the game industry has been creating content with automation tools for so long that AI probably won't add much to the equation, beyond more time and money, for debatable benefits.
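Her point about human generation being a well-explored space maps to a standard technique: morph targets (also called blendshapes), where sliders blend per-vertex offsets into a base mesh. A bare-bones sketch with stand-in data rather than real mesh vertices:

```python
# Parametric character generation via morph targets: every output vertex
# is the base mesh position plus weighted per-target offsets.

def blend(base, targets, weights):
    """Blend a base mesh with morph-target deltas.

    base:    list of (x, y, z) vertices
    targets: dict name -> list of (dx, dy, dz) deltas, same length as base
    weights: dict name -> float slider value, typically 0.0..1.0
    """
    out = []
    for i, (x, y, z) in enumerate(base):
        for name, w in weights.items():
            dx, dy, dz = targets[name][i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out

# A two-vertex "mesh" just to show the arithmetic; a real character
# creator blends thousands of vertices against dozens of sliders.
base = [(0.0, 1.7, 0.0), (0.3, 1.0, 0.1)]
targets = {"taller": [(0.0, 0.2, 0.0), (0.0, 0.1, 0.0)],
           "broader": [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]}
print(blend(base, targets, {"taller": 0.5, "broader": 1.0}))
```

Character creators in countless games are, at heart, this loop run over artist-authored targets: no AI training required.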
every single post in this thread is a great example of how little ppl understand about how incredibly hard good procgen execution is. I could literally go over this post example by example and list major pitfalls that we have already seen in released games https://t.co/NErcAlQtK3
— Aura (@MOOMANiBE) December 12, 2022
For more on this topic, Aura has a great Twitter thread (above) undercutting some bold claims by a venture capitalist proclaiming, very VC-ishly, that AI is The Future of 3D Worlds.