Now that ChatGPT is openly available online, I've been checking in on the system's knowledge of virtual worlds, and yep, it's still confused. Worse, it's actively confusing what I've written about virtual worlds. Ask it, "How does Wagner James Au define the Metaverse?", and you get something like:
[He] defines the Metaverse as a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual reality. This definition emphasizes the interconnectedness of virtual spaces and their integration with our physical world, suggesting a seamless blending of digital and physical experiences.
As the saying goes, this is not right; it's not even wrong. I don't know how ChatGPT came up with this muddy extrusion, when I painstakingly researched my definition and put it on the web (not to mention in my book, and in several online articles about my book) starting years ago:
The Metaverse is a vast, immersive virtual world simultaneously accessible by millions of users through VR and other devices, highly customizable avatars and powerful experience creation tools. It is integrated with the real world economy and compatible with external technology.
A properly targeted Google search leads to that definition, but ChatGPT's authoritative, mansplain-y format is designed to instill confidence in its answers. (Even the fine-print qualification, "ChatGPT can make mistakes", understates its rampant potential for off-base laziness.)
It's amusing to read AI evangelists assert that programs like ChatGPT will soon replace writers, when I mostly see ChatGPT creating more tedious work for writers: making us spend extra time chasing down its errors and turning its mediocre, bland answers into something readable.
Longtime journalist/editor Mitch Wagner, who uses ChatGPT as a side assistant for spellchecking and thesaurus lookups while writing his own articles, made some similar points recently:
Some ways I find ChatGPT and other generative AI useful today:
– Generating questions for interviews. ChatGPT is surprisingly great at that.
– Generating images.
– Occasionally writing draft introductions to articles, as well as conclusions, descriptions and summaries. I’ve always had trouble writing that kind of thing. I don’t use the version ChatGPT generates—I tear that up and write my own—but ChatGPT gets me started. I don’t do this often, but I’m grateful when I do.
– Casual low-stakes queries, when I remember to use ChatGPT for that. “What was the name of the movie that was set in a boarding house for actresses that starred Katherine Hepburn?” “Stage Door.” “Was Lucille Ball in that one too?” “Yes.” “Was that Katherine Hepburn’s first movie?” “No.” And ChatGPT provided some additional information. I probably could have gotten that information from Google, but ChatGPT was faster.
– I find otter.ai extremely useful for transcriptions, likewise Grammarly for proofreading. Do those applications use GenAI? I don't know.
My big problem, and the reason I don't use ChatGPT more, is that ChatGPT lies. Not only that, but it lies convincingly. A convincing liar is even worse than a liar. I don't have much use for an information source that I can't trust. I don't see an obvious way to solve this problem.
The trust piece is the worst part, especially when ChatGPT bullshits so confidently. When we writers put our bylines on a work, we're effectively saying that we stand behind every word. Any additions from ChatGPT force us to recheck every word in its answers, even when those answers are about our own writing.