Interesting Axios story illustrates just how much Silicon Valley has shifted its interest and investment toward Generative AI startups like OpenAI and Midjourney, and away from metaverse platforms and related technology:
According to PitchBook data compiled by Axios Media Deals’ Tim Baysinger, through March 16, 2022, companies that played in the metaverse or web3 space had raised nearly $2 billion in funding.
So far this year, metaverse and web3 companies have raised $586.7 million, a bit more than a quarter of last year’s total. The totals for generative AI companies are the inverse: Through March 16, 2022, the generative AI space saw $612.8 million in funding. This year, it’s up to $2.3 billion.
Driving the news: In a note Tuesday announcing Meta would lay off 10,000 more employees, Zuckerberg spotlighted AI work and reduced the metaverse to an “also.” “Our single largest investment is in advancing AI and building it into every one of our products,” Zuckerberg wrote. “Our leading work building the metaverse and shaping the next generation of computing platforms also remains central to defining the future of social connection.”
As I’ve explained before, conflating web3 with the Metaverse is a huge mistake, as is assuming Meta leads the metaverse industry. That aside, it’s clearly the case that the Valley has shifted its buzz toward generative AI.
Is that smart? Obviously I’m biased when I say this, but there are already 520 million+ active users across many metaverse platforms, while the Metaverse’s addressable market is at minimum everyone who regularly enjoys multi-user immersive experiences (i.e. 3D games online), roughly 1-2 billion people.
On the other hand, there are several reasons to believe Generative AI is not as transformative as its most bullish boosters assume. For instance:
It’s often just an iterative version of technology we already have, and its shortcomings quickly become apparent in many contexts:
When you ask ChatGPT a question, you get a response that fits the patterns of the probability distribution of language that the model has seen before. It does not reflect knowledge, facts, or insights. And to make this even more fun, in that compression of language patterns, we also magnify the bias in the underlying language… [and] because of the way these models are designed, they are at best, a representation of the average language used on the internet. By design, ChatGPT aspires to be the most mediocre web content you can imagine.
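To make that “probability distribution of language” point concrete, here’s a toy sketch (illustrative only, not from the quoted piece, and absurdly simpler than any real model): a bigram generator that can only replay the statistics of the text it was fed, which is exactly why its output reflects average language rather than knowledge or insight.

```python
# Toy illustration (not from the article): a tiny bigram "language model" that just
# replays the probability distribution of the text it has seen. Real LLMs are vastly
# larger, but the core move is similar: sample a statistically likely next token,
# with no notion of facts or insight.
import random
from collections import defaultdict, Counter

corpus = "the metaverse is the future the metaverse is hype the future is generative ai".split()

# Count which word follows which: the "probability distribution of language".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    """Emit a statistically typical continuation: average language, not knowledge."""
    word, out = start, [start]
    for _ in range(length):
        followers = bigrams.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the metaverse is the future the future is hype"
```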
It’s going to contribute to a security/hacking nightmare:
[Philip Rosedale’s] interest isn’t exactly about Nostr becoming an alternative to Facebook or whatever, but being a solution to a completely different but equally concerning problem in technology: The growing power of AI programs to spoof or deep fake real people. “I think that stuff is going to become an ultraviolet catastrophe in the next year,” he tells me. “Maybe less than a year because of the AIs.”
You’ve probably seen audio recordings like this, where a deep fake is able to imitate Obama and other public figures. That’s all fun and games at the moment, but what happens when that same technology is used to impersonate a good friend of yours — and then that “friend” calls you up, telling you they’re stranded in a foreign country, and they need you to wire them $2500?
“I think one of the things that’s going to happen with AI is that all our messages are gonna become [spoofable by AI deep fakes]– we can’t trust them anymore,” as Philip Rosedale puts it.
In terms of 3D graphics, it’s probably not going to be a killer app in game/metaverse development:
“AI isn’t going to affect any field that doesn’t have a giant database of free (stolen) training data for it to absorb,” she explained. “There aren’t enough 3d models in existence for basically any model to eat and spit out anything usable. Even most bespoke, handmade 3d model generation algorithms spit out models that are completely unusable in games because the logic behind character creation and topology is extremely precise and needs to be carefully thought out. So: it’s not going to change it.”
It’s on a collision course with a copyright/intellectual property disaster:
The Lensa app has gone viral in recent weeks, with much excitement over its new AI-driven “magic avatars” feature.
Small problem: It’s not exactly magic nor purely artificial intelligence. Instead, to create these avatars, the app is apparently scraping up artists’ images without their consent. The image appropriation is so blatant in many cases, the Lensa-generated images even include the original artist’s signature…
“I think they didn’t think artists would stand up for themselves because we don’t [have] industry labels the way the music industry does,” Lauren tells me. She points this out because Stability AI, the company behind the Stable Diffusion model that provides Lensa’s neural network, is very careful about how its platform samples and trains from recorded music.
“The fact that they do so with their music model shows they are well aware of copyright (I mean it’s a basic concept, anyone who isn’t a little kid is aware of copyright), and that it’s not something that was too complicated to implement.”
And so on. I strongly suspect generative AI is mainly going to improve on existing applications, while also introducing costly new problems like those I just mentioned, and more.
But to be clear, I do think much of generative AI is very exciting and will lead to some extremely cool use cases, such as this one:
Imagine entire digital personae you can engage with that are directly based on yourself. Or for that matter, NPCs directly based on novelists, poets, and public speakers from history and fiction.
Michelle agrees on that front:
“This is the stuff I think that has the most interesting ramifications: more broadly, more immersive human / computer interface loops, from conversation with virtual therapists to in-game interactions for virtual worlds, given there is user input, AI could be used to train highly customizable responses or generate unique storylines per use.”
In other words: Ironically, some of the best applications of generative AI will be as middleware inside metaverse platforms.
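To sketch what that middleware could look like, here’s a rough, illustrative example (mine, using a stand-in text-generation function rather than any particular model or API): an NPC keeps a persona and a running dialogue, and hands the actual reply generation off to whatever model the platform plugs in.

```python
# Hypothetical sketch of generative AI as "middleware" inside a metaverse platform.
# The generate callable is a stand-in, not a real library API; a production version
# would route it to whatever language model the platform licenses.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PersonaNPC:
    name: str
    persona: str                    # e.g. "a 19th-century novelist", or a digital twin of the user
    generate: Callable[[str], str]  # plug in any text-generation backend here
    history: List[str] = field(default_factory=list)

    def respond(self, player_input: str) -> str:
        # Build a prompt from persona + running dialogue so replies stay in character.
        self.history.append(f"Player: {player_input}")
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            + "\n".join(self.history)
            + f"\n{self.name}:"
        )
        reply = self.generate(prompt)
        self.history.append(f"{self.name}: {reply}")
        return reply

# Usage with a trivial stand-in backend; a real deployment would call an LLM here.
npc = PersonaNPC("Ada", "a witty virtual therapist",
                 generate=lambda prompt: "Tell me more about that.")
print(npc.respond("I'm stressed about my avatar's wardrobe."))
```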
Read More: nwn.blogs.com