By Stuart Kerr, Technology Correspondent
Published: 28 June 2025
Last Updated: 28 July 2025
Contact: liveaiwire@gmail.com | Twitter: @LiveAIWire
Author Bio: About Stuart Kerr
Artificial intelligence is reshaping how stories are written, edited, and distributed. From newsrooms to ad agencies, AI tools promise speed and scale—but they also stir debate over bias, job loss, and editorial integrity. As machines begin creating the content we consume, the media world faces a pivotal question: are we enhancing human creativity or replacing it?
From Deadlines to Data Models
AI’s growing presence in media is no longer hypothetical. In 2023, Google introduced Genesis, a prototype aimed at helping journalists generate article drafts. Early demonstrations for The New York Times and The Washington Post sparked industry-wide concern over editorial integrity and potential job displacement. Vanity Fair reported that some newsroom staff saw Genesis not as a tool—but as a threat.
Yet, for many outlets under pressure to do more with less, the appeal is clear. Generative models like OpenAI’s ChatGPT have found favour with marketing firms and bloggers alike, offering cost-effective ways to produce emails, captions, and blog content. At TikTok, generative AI is now being used not just to recommend content but to help create it. TechCrunch reported that TikTok has rolled out tools that automatically label AI-generated media, a nod to growing ethical demands for transparency.
The implications stretch beyond productivity. AI-generated content is increasingly indistinguishable from human work. One example from our coverage—Digital Necromancy: AI and the Ethics of Reviving the Dead—explores how deepfake technology is used to recreate deceased figures in video and voice. These advances challenge traditional boundaries of authorship and consent.
Real-World Adoption and Ethical Tightropes
AI content systems excel at speed. The Associated Press, for instance, generates thousands of automated earnings reports each year with over 90% accuracy. But accuracy is only half the story. A 2024 study from the Reuters Institute found that while audiences appreciated the efficiency, they were less trusting of stories not authored by humans; where AI involvement was not disclosed, trust ratings dropped by up to 20%.
Columbia Journalism Review echoed these findings, highlighting AI’s persistent tendency to hallucinate facts or subtly misrepresent tone. Claire Leibowicz of the Partnership on AI has been vocal in calling for independent audits, noting: “AI is a support mechanism, not a substitute for editorial judgement.”
In our article Generative AI in Journalism: The Human Cost of Automation, we explored how these tools are already reshaping workflows. BBC journalists now use AI for first drafts, with human editors cleaning up tone and narrative. This hybrid model is fast becoming the norm.
Misinformation, Deepfakes, and the Battle for Truth
While AI may reduce overheads, it introduces new dangers, particularly around misinformation. In one 2024 incident, widely covered by The Verge, a synthetic video depicting a European political figure accumulated more than 10 million views before it was publicly debunked.
AI-generated scams and fraudulent content are multiplying. Our feature The AI Scam Epidemic outlines how synthetic voices, images, and emails are increasingly used in phishing attacks, financial fraud, and impersonation.
Recent institutional research helps quantify the growing risks. The Reuters Institute’s 2024 report, Generative AI and News Audiences, found that 58% of consumers could not reliably distinguish AI-generated articles from human-written ones, and that trust in undisclosed AI authorship remained critically low across all surveyed nations.
Economic Impact: Job Loss or Transformation?
Beyond ethical concerns, AI’s economic impact looms large. A 2025 study by the World Association of News Publishers predicts that 20% of entry-level journalism roles may be automated by 2030. However, retraining programs are emerging. The BBC now offers courses in AI-assisted editing and prompt engineering. New roles—such as “AI content auditor” and “synthetic verification editor”—are already being advertised.
Data from the Reuters Institute’s 2025 Journalism and Tech Trends Report suggests that hybrid newsrooms, those combining AI efficiency with human oversight, are gaining ground on both purely automated and purely traditional outlets.
What’s Next: Regulation, Transparency, and Open Models
As capabilities increase, so do calls for oversight. The EU’s AI Act, now in force, mandates that AI-generated content be clearly labelled. Non-compliance could carry fines of up to €35 million. Smaller newsrooms lacking compliance infrastructure may struggle to meet these requirements.
On the technical front, a shift toward open-source models like Meta’s LLaMA has made content tools more accessible. Yet with this access comes responsibility. Without embedded ethical frameworks, the cost of misuse rises.
More platforms now integrate native detection, labelling, and explainability features into their AI stacks. Our editorial on The AI Scam Epidemic also covers tools under development to track and label algorithmic content at scale.
Final Thoughts: Empowerment Through Caution
AI content creation is not inherently good or bad—it is powerful. It can democratise publishing, enhance storytelling, and give voice to marginalised creators. But its misuse—whether through bias, misinformation, or opacity—threatens the public trust that journalism relies on.
As Columbia Journalism Review warns, “[AI tools] require just as much journalistic judgement, if not more, than traditional methods.” In the years ahead, that balance—between speed and integrity—will determine who thrives and who fades.
About the Author
Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life. Read more