G/O Media, an online media company known for its publications Gizmodo, Kotaku, Quartz, Jezebel, and Deadspin, has revealed its intention to embark on “modest testing” of AI-generated content on its websites. This move follows the broader trend within the media industry, as many organizations are exploring the potential of AI implementation. Merrill Brown, G/O Media’s editorial director, stated in an email to staff that the adoption of AI should come as no surprise since it is being considered by “everyone in the media business.”
The trial aims to produce a limited number of stories across most of its sites, focusing on lists and data-driven articles. Nevertheless, the decision has sparked outrage among many journalists at G/O Media, who see AI content as a devaluation of their work, an additional burden on editors, a threat to the credibility of their outlets, and a source of frustration for readers.
I see G/O Media’s AI experiment is going well already pic.twitter.com/jFgMLW59vS
— Carli Velocci 👻👽 (@velocciraptor) July 5, 2023
The announcement has raised concerns among unions representing G/O Media and The Onion staff. In a joint statement, they expressed shock at the news, emphasizing that the hard work of journalists cannot be replaced by unreliable AI programs notorious for generating misinformation and plagiarizing the work of real writers. The unions argue that AI, in any form, undermines their mission, demoralizes reporters, and erodes the trust of their audience.
Hello! As you may have seen today, an AI-generated article appeared on io9. I was informed approximately 10 minutes beforehand, and no one at io9 played a part in its editing or publication.
Here is a statement I have sent to G/O Media, alongside a lengthy list of corrections. pic.twitter.com/xlROmxmupA
— James Whitbrook (@Jwhitbrook) July 5, 2023
Notable figures within G/O Media’s publications have also voiced their dissent. Zack Zwiezen, a writer at Kotaku, took to Twitter to express his dissatisfaction and urge others to spread the word. Ashley Feinberg, a former Gizmodo writer known for her investigative internet reporting, described the situation as potentially nightmarish.
Our statement on G/O Media’s plan to implement AI content, just days after laying off newsroom members pic.twitter.com/oIeqoRXf4W
— GMG Union (@gmgunion) June 29, 2023
The current wave of AI-generated content recalls the debates over AI and the role of artists that unfolded several months earlier. As long as metrics continue to reward character counts, publication volume, traffic, and citation indices, the internet will see an ever-larger share of AI-driven content.
On one hand, AI helps produce expansive, elaborate content, padding letters, reports, and certificates to their fullest length. On the other, it compresses and summarizes, stripping out filler and spam. In between sit the people measuring traffic, metrics, and indices, tracking how AI-generated content performs and how it is received.
Meanwhile, Bankrate has resumed publishing AI-generated articles, claiming that each piece undergoes meticulous fact-checking and editing by human journalists before being published. The company aims to ensure that its articles are accurate, authoritative, and valuable to its audience. This decision follows a recent incident where several articles were discovered to contain factual errors and apparent instances of plagiarism.
Referred to as a “journalistic disaster” by The Washington Post, the errors were of a kind that could get a student expelled or a journalist fired. In the wake of that controversy, Bankrate and its sister site CNET, both owned by Red Ventures, a media company reportedly worth billions of dollars, had paused the publication of AI content indefinitely.
A basic examination of Bankrate’s AI-generated content reveals that rudimentary mistakes continue to occur, even as executives advocate for AI usage. These errors are not being effectively identified and rectified by the human staff, leading to their dissemination to unsuspecting readers. For instance, an article discussing the best places to live in Colorado was found to contain inaccurate information. The article claimed that Boulder’s median home price is $1,075,000, which contradicts the Redfin data cited by Bankrate, indicating a lower value. Similarly, the article mentioned Boulder’s average salary as $79,649, whereas the most recent figure from the Bureau of Labor Statistics is $89,593. Moreover, the article inaccurately reported Boulder’s unemployment rate as 3.1 percent, whereas the Bureau of Labor Statistics data cited reveals it to be 2.5 percent. These inaccuracies call into question the reliability of the AI-generated content and raise concerns about the verification process. In response to the incident, the article has been removed, and Bankrate has pledged to update it with the most recent and accurate data, ensuring that all future data studies include the date range when the information was collected.
AI-Driven Journalism Enhances Newsrooms
The integration of AI in newsrooms worldwide has been steadily gaining momentum, with the technology producing financial news, sports stories, weather updates, and traffic reports. Companies such as Bloomberg, The Washington Post, and Newswire have embraced AI to generate news articles and identify emerging trends.
Charlie Beckett, Director of the media think tank Polis, believes that AI can empower journalists by providing them with enhanced abilities in discovery, creation, and connection. Currently, newsrooms predominantly employ AI in three key areas: news gathering, production, and distribution.
Among the trailblazers in AI-driven journalism is the Stuttgarter Zeitung, which has developed a machine learning system called CrimeMap. The system categorizes incoming information and can extract the time and location of a crime. CrimeMap is used to analyze data, alert journalists to breaking news, viral stories, and unusual data patterns, and measure the impact of the content media companies produce.
In Finland, the national broadcaster Yle is leveraging AI to enhance news personalization for its readers. The company has devised a dual-function system known as Voitto, which operates as both a robot journalist and a smart news assistant. By employing machine learning algorithms, Voitto improves recommendations based on users’ reading history, interactions, and direct feedback, ensuring a more tailored news experience.
AI is also being employed to tackle harassment and abuse on news platforms. Companies like Yle are working to create a more inclusive and accessible news environment for their users: AI-assisted content moderation helps detect and mitigate abusive behavior, fostering a safer experience for readers.
AI’s Influence on Journalism
When the suspicious pitches were put to an AI chatbot, it couldn’t definitively determine their source, but it acknowledged the potential for human mimicry and even speculated that future chatbots might produce text indistinguishable from human writing. Given chatbots’ notorious habit of fabricating facts and inventing sources, their reliability as fact-checkers is questionable.
Considering this, an OpenAI text classifier was employed to assess the likelihood of AI generation. The results indicated that two of the pitches and one of the Medium blog posts associated with the student were potentially AI-generated. When contacted, the student confirmed that AI technology had indeed assisted in producing the pitches. Unapologetic, the student expressed a belief in harnessing the power of AI to create high-quality content that meets the needs of clients and readers, blending human creativity with AI technology for impactful outcomes.
Although the Observer opted against employing the student as a writer, Newsquest recently advertised an AI-powered reporter role for its local news operation, highlighting the evolving landscape of journalism. The impact of AI on the field remains uncertain, as demonstrated by previous instances where AI-generated articles on health and personal finance were riddled with inaccuracies. BuzzFeed also embarked on utilizing AI to “enhance quizzes,” but the subsequent AI content rollout received criticism for its hackneyed writing style.
In the realm of journalism, questions arise over how AI-generated text should be handled: if AI-generated content finds its way into articles, should it be disclosed? The topic was recently deliberated in a San Francisco Press Club discussion, where the importance of human-authored news was emphasized. The Observer aligns with this stance, recognizing the significance of human-authored content. These concerns extend to news organizations at large, with colleagues at the Guardian also investigating the broader effects of technology on journalism.
For now, the Observer remains AI-free. As readers explore various news sources, caution is advised when encountering content that resembles promotional material for financial services. The ongoing debate surrounding AI’s impact on journalism continues, and its potential implications demand careful consideration from both journalists and news organizations alike.