By Lerato Lebakae
As newsrooms race to keep pace with an ever‑accelerating information cycle, AI has become both a powerful ally and a formidable threat. Its tools can instantly flag dubious claims and verify images, yet those same technologies also fuel hyper‑realistic deepfakes and mass‑produced fake articles, blurring the line between fact and fiction.
The impact of AI on news accuracy and journalism ethics is now front and center, as these systems offer robust fact‑checking and disinformation‑detection capabilities. Yet their rise raises urgent questions around accountability, transparency, and the risk of eroding public trust in the media.
Misinformation and fake news spread swiftly across digital channels, and AI has been hailed as a remedy. Advanced algorithms can scan massive volumes of content in real time, uncover patterns of falsehoods, and even verify facts before publication, dramatically boosting the speed and scope of fact‑checking operations so journalists can respond more rapidly to disinformation.
A 2025 NewsGuard report finds that, since May 2024, the number of websites publishing entirely AI‑generated false articles has jumped from 49 to more than 600, a 1,127% increase, highlighting how quickly deepfakes and fabricated text can proliferate online.
Meanwhile, a 2024 Reuters Institute survey found that 55% of global news organisations now employ AI‑powered systems, primarily for content verification and automated transcription, underscoring both the promise and the pitfalls of integrating these technologies.
According to Joel Paul’s analysis, AI technologies are increasingly being used to combat the rising tide of misinformation and fraudulent content online. Developers have built tools that identify and counteract the spread of misleading information across digital platforms, drawing on techniques such as machine learning, natural language processing (NLP), and deep learning.
Paul discusses a range of AI-powered tools, such as deep learning for image and video verification and automated fact-checking systems, that have proven effective in detecting manipulated information and determining its validity. These tools enable media organisations and fact-checkers to respond to disinformation more quickly and accurately, ultimately helping to sustain truthful and dependable news in the digital age.
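None of the sources above describe a specific implementation, but the NLP-based detection Paul refers to can be illustrated with a deliberately simplified sketch: a toy Naive Bayes text classifier, trained on a handful of invented headlines, that labels text as "real" or "fake". Every headline, function name, and label below is hypothetical; production fact-checking systems train on large curated corpora and use far more sophisticated models.

```python
import math
from collections import Counter

# Invented toy training data (real systems use large labelled corpora
# of verified and debunked claims, not six hand-written headlines).
TRAIN = [
    ("scientists publish peer reviewed study on vaccine safety", "real"),
    ("official figures show steady economic growth this quarter", "real"),
    ("government confirms election results after audit", "real"),
    ("shocking miracle cure doctors dont want you to know", "fake"),
    ("secret plot exposed you wont believe what happens next", "fake"),
    ("anonymous insider reveals shocking hidden truth", "fake"),
]

def train(examples):
    """Count word frequencies per label for a naive Bayes model."""
    counts = {"real": Counter(), "fake": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each label by log prior + log likelihood, with
    add-one smoothing so unseen words do not zero out a label."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, totals = train(TRAIN)
print(classify("shocking secret cure exposed", counts, totals))  # → fake
```

The point of the sketch is the shape of the approach, not the model: the classifier learns statistical patterns that distinguish the two classes, which is also why biased or unrepresentative training data (discussed below) translates directly into biased verdicts.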
However, artificial intelligence is not immune to error. Machine learning systems depend on the quality of the data on which they are trained; if that data is biased or inaccurate, the results can be misleading. Moreover, the same AI technologies used to detect fake news can be used to generate it. Deepfakes, hyper-realistic AI-generated audio and video content, are making it hard for people to distinguish between what is real and what is not.
A recent report by The Washington Post highlights the growing role of artificial intelligence in fuelling misinformation online. According to the report by journalist Pranshu Verma, AI is increasingly being used to generate fake news content that closely mimics legitimate journalism. These AI-generated articles often spread false information on critical topics such as elections, wars, and natural disasters, misleading readers and undermining public trust in factual reporting.
This sharp rise in AI-generated disinformation across the media demonstrates how these technologies are being exploited to automate and amplify false content on a global scale.
MISA Lesotho, in collaboration with Code for Africa, has launched CheckDesk, a ground-breaking effort aimed at helping Lesotho citizens detect and combat the growing threat of disinformation. CheckDesk fact-checks the accuracy of content people encounter, whether social media messages, news articles, photos or videos, before they accept or share it.
One of CheckDesk’s primary capabilities is identifying AI-produced content, such as manipulated images, deepfake videos and fabricated text. With the rise of artificial intelligence, misleading content has grown more convincing and harder to detect. CheckDesk confronts this challenge head on, employing innovative AI-powered verification tools and digital forensic techniques to unearth the truth behind suspect content.
Importantly, CheckDesk is run by a team of trained professionals equipped with the resources and expertise to help citizens verify information, ensuring that people have guidance when navigating the digital space.