AI, Journalism and Fact-checking in Ghana: Navigating the Maze

The deployment of artificial intelligence (AI) in the news industry will transform newsroom practice, scale the work of media outlets, improve the efficiency of journalists, and impact the quality of work produced by the media.

Artificial intelligence is already being used in news production, from story discovery and story production to story distribution. Newsrooms are utilising machine learning to analyse massive datasets and discover patterns, and journalists are creating templates so that computers can write data-based stories, freeing them from routine reporting to attend to larger and more complex projects. In other places, newsrooms are using AI to personalise story recommendations for their audiences. Research has shown that media outlets have adopted AI as a result of factors including the “recent technological advancements, market pressures partially from the industry’s financial challenges, competitive dynamics with a focus on innovation, and the pervasive sense of uncertainty, hype, and hope surrounding AI.” However, the potential to increase newsroom efficiency has been identified as the central motivator for adopting AI.

The American computer and cognitive scientist John McCarthy is credited with coining the term “artificial intelligence” in a 1955 proposal for what became the 1956 Dartmouth conference. He defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.” Since then, other scientists have proposed various ways of making machines as intelligent as human beings. However, the English mathematician Alan Turing is said to have anticipated the notion of AI when he proposed “The Imitation Game” in 1950 as the ultimate test of whether a machine was intelligent: whether it could imitate a human being and provide answers to questions indistinguishable from those of a person. The phrase “The Turing Test” refers to this proposal as a way of dealing with the question of whether machines can think.

“I believe that in about fifty years’ time, it will be possible to program computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. …I believe that at the end of the century, the use of words and generally educated opinion will have altered so much that one will be able to speak of machine thinking without expecting to be contradicted,” wrote Turing, often referred to as the father of modern computer science.

It is clear today that so much has changed since 1950, when Alan Turing proposed the “Imitation Game” test. There is a flurry of excitement everywhere following the introduction of OpenAI’s GPT-3, GPT-4, and ChatGPT, all generative language models developed to support learning and conversation. Generally, two distinct classifications under AI are Generative AI and Machine Learning. While the two share a common foundation, their applications, methodologies, and outcomes differ significantly. Machine Learning focuses on learning from data to make predictions or decisions. In other words, Machine Learning refers to the changes in systems that perform tasks associated with artificial intelligence, such as recognition, diagnosis, planning, robot control, and prediction. Generative AI, on the other hand, uses algorithms and models to generate new content such as text, photos, code, videos, 3D renderings, music, and more, mimicking human-like creative processes. The major distinction between the two methodologies is that while Generative AI algorithms are designed to create new data, Machine Learning algorithms analyse existing data.

It is worth noting that an artificial intelligence system learns from experience, uses that learning to reason, recognises images, solves complex problems, understands language and its nuances, and creates perspectives, among other capabilities. As is evident across the world, every aspect of news production – story discovery, story production, and story distribution – can be affected by machine learning. The adoption of AI in Ghana’s news industry will benefit journalists and media outlets on three primary levels: (a) the production of texts, (b) interaction with the audience, and (c) the performance of mundane tasks, including the writing of press releases. For instance, in the automatic production of content, AI algorithms can convert structured data such as sports results and weather forecasts into informative, narrative texts, leading to the production of stories with or without the intervention of journalists.
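At its simplest, this kind of automated story generation slots structured data into a pre-written narrative template. The minimal sketch below illustrates the idea only; the teams, figures, and venue are invented placeholders, not real match data, and a production system would handle many templates and edge cases.

```python
# Minimal sketch of template-based automated story writing:
# a structured record (here, a hypothetical football result)
# is rendered into a short narrative text.

MATCH_TEMPLATE = (
    "{home} beat {away} {home_goals}-{away_goals} at {venue} on {day}, "
    "with {scorer} scoring the decisive goal."
)

def generate_match_report(record: dict) -> str:
    """Render one structured match record into a short narrative text."""
    return MATCH_TEMPLATE.format(**record)

# Illustrative placeholder data, not a real result.
result = {
    "home": "Accra Hearts of Oak",
    "away": "Asante Kotoko",
    "home_goals": 2,
    "away_goals": 1,
    "venue": "the Accra Sports Stadium",
    "day": "Sunday",
    "scorer": "a second-half substitute",
}

print(generate_match_report(result))
```

A newsroom would typically maintain several such templates and vary the wording, so that hundreds of routine match or weather reports can be produced from a data feed while journalists focus on analysis.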

The Paris Charter on AI and Journalism, adopted on November 10, 2023, captured the essence of the age when it said: “AI, spanning from basic automation to analytical and creative systems, introduces a new category of technologies with an unparalleled capacity to intersect with human thought, knowledge, and creativity. It represents a considerable shift in information gathering, truth-seeking, storytelling, and the dissemination of ideas. As such, it will profoundly alter the technical, economic, and social conditions of journalism and editorial practice.”

Similarly, in the area of fact-checking, AI tools can help Ghanaian fact-checking bodies and media outlets with the detection of trending topics for media and information literacy (MIL) interventions, the moderation of comments, the collection of information, the identification and verification of mis- and disinformation content, and the automatic translation of text and audio. When effectively employed, AI can make Ghanaian journalists and researchers more efficient and give them the space to pursue life-changing stories and interviews that AI-powered tools cannot generate.
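One of these aids, matching incoming claims against claims that have already been verified so that recirculating misinformation is flagged quickly, can be sketched as follows. This is a deliberately simplified illustration using character-level similarity from Python’s standard library; the claims and verdicts are invented examples, and a real system would use semantic-similarity models rather than string matching.

```python
# Illustrative sketch: flag an incoming claim if it closely matches
# a previously fact-checked claim. Claims and verdicts below are
# invented examples for demonstration only.

from difflib import SequenceMatcher

VERIFIED_CLAIMS = {
    "drinking hot water cures malaria": "False",
    "the electoral commission has extended voter registration": "True",
}

def match_claim(new_claim: str, threshold: float = 0.6):
    """Return (closest verified claim, verdict) if similarity exceeds
    the threshold, otherwise None."""
    best, best_score = None, 0.0
    for known in VERIFIED_CLAIMS:
        score = SequenceMatcher(None, new_claim.lower(), known).ratio()
        if score > best_score:
            best, best_score = known, score
    if best_score >= threshold:
        return best, VERIFIED_CLAIMS[best]
    return None

print(match_claim("Drinking hot water can cure malaria"))
```

Even this crude matcher shows the workflow: a checker is alerted to a likely repeat of debunked content and can spend their time on genuinely new claims instead.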

There are concerns across the world that AI will replace news workers or the work they do. At the moment, AI aids journalists, but no one can guarantee that this will remain the state of affairs in the coming years. What is true is that AI has sufficiently matured to replace the practice described as “armchair journalism,” whereby journalists produce news reports from the newsroom without going out to the field to interview sources. It is such routine tasks, including the writing of press releases and other stories devoid of human emotion, that AI will phase out, because these tools have been trained to perform them – sometimes even better. Even so, some tasks remain best suited to humans, including situations where complex communication or expert thinking is required. Ghanaian journalists should endeavour to sharpen their skills and learn how the available AI platforms operate, going beyond the “she said, he said” kind of news reporting. Adding the all-important human touch can distinguish journalists’ work from that of AI tools.

Today, better-resourced media organisations in Europe, America, and Asia, such as the Associated Press and Bloomberg, employ AI in their story production with automated content generation to improve output quality and speed. However, most media outlets, especially smaller ones, rely extensively on AI products and infrastructure developed by major tech companies like Google, Amazon, and Microsoft. Newsrooms in Ghana can equally use third-party solutions from platform companies as story discovery and reporting tools. Again, media outlets, including the New York Times, have invested in artificial intelligence by hiring people specialised in artificial intelligence, machine learning, data science, and mobile engineering. In a 2016 memo, the Bloomberg Editor-in-Chief, John Micklethwait, told his staff: “automated journalism has the potential to make all our jobs more interesting…The time spent laboriously trying to chase down facts can be spent trying to explain them. We can impose order, transparency, and rigor in a field that is something of a wild west now.” This advice should be heeded by Ghanaian journalists mindful of the great need to protect the country’s democratic structures through the promotion of public accountability and good governance.

While encouraging the use of AI in the news production process in Ghana, we need to acknowledge that there is an army of people scheming right now to deploy the same tools to cause havoc around the world – to destabilise peaceful countries, defame political opponents, misinform the public, remove democratically elected governments, twist the arms of voters, and influence opinion in favour of their paymasters. Such abuses should be expected, and we will see many of them around us in the coming years. Notwithstanding the narrow nature of today’s artificial intelligence, there is a need for an ethical framework to guide AI use in journalism. The framework should focus on steps to evaluate the quality of data and algorithms, analyse potential bias in models, and ensure transparency in the use of AI-based tools.

Despite its benefits, there are grave concerns and questions about the quality of the outputs created by these AI tools, the erosion of ethical principles and the core values of journalism, and the challenge to the right to information. The Paris Charter on AI and Journalism noted that: “AI systems have the potential, depending on their design, governance, and application, to revolutionise the global information landscape. However, they also present a structural challenge to the right to information. The right to information flows from the freedom to seek, receive, and access reliable information. It is rooted in the international legal framework, including the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Partnership for Information and Democracy. This right underpins the fundamental freedoms of opinion and expression.”

These ethical concerns can be effectively addressed by the programmers of AI tools, Ghanaian media outlets, and policymakers, backed by an appropriate legal infrastructure. Ghanaian media outlets need to be deliberate about the way they use platform AI to promote the public good. Additionally, media outlets and journalists need to be transparent with their readers when they use AI to produce or distribute content. Media outlets need to draw the line between authentic content and synthetic content generated with the help of AI tools. In other words, local media outlets should avoid using AI-generated content that mimics real-world captures and recordings to mislead the public.

As part of efforts to develop an ethical framework, the Paris Charter on AI and Journalism has outlined ten core principles that can guide Ghanaian media outlets in their interaction with AI. These include: (a) journalism ethics guide the way media outlets and journalists use technology; (b) media outlets prioritise human agency; (c) AI systems used in journalism undergo prior, independent evaluation; (d) media outlets are always accountable for the content they publish; and (e) media outlets maintain transparency in their use of AI systems, among others. Again, it is suggested that developers of AI tools should credit the sources of their information, compensate Ghanaian authors of content used in training their tools, and respect rights holders’ intellectual property.

To participate effectively in this age of AI, Ghanaian media outlets need to invest in artificial intelligence by hiring experts in data science and machine learning, entering into partnerships with tech companies that have developed AI products, building in-house AI products for their newsrooms subject to the availability of resources, and training their journalists in the operation of AI platforms developed by third parties. We need to remember that a functioning democracy requires an informed public, and Ghanaian journalists have a duty to help their audiences participate fully in public life without fear or favour. Quality journalism will always be the tool for achieving this objective in the country.

I have no doubts in my mind that AI will enhance the capabilities of Ghanaian journalists, save them time attending to serious and complex topics, improve their overall efficiency, and increase the mass media industry’s productivity in the country.

However, Ghanaian journalists, media outlets, programmers, and policymakers must play an active role in the governance of AI systems to ensure ethical compliance, respect for copyright, the payment of compensation to authors whose works are used in training AI models, the prevention of rent extraction by platform companies, and the promotion of the public good.
