Have you listened to a song rendered by Artificial Intelligence (AI) but derived from the sound, beat and philosophy of musical greats like Ray Charles, Otis Redding, Sam Cooke and Marvin Gaye, to name a few?
So good are some of the renditions that it is difficult for the untrained ear to tell that they were made using AI. What is more, some sound even better than the real thing.
Think about it. AI has become a major transformative force, especially in the creative industry. It is now possible, say, to write an article like this in five minutes — or less — just by giving ChatGPT the right prompts. And there are many in the news industry who are doing this undetected.
However, it is always recommended that one acknowledge any use of AI: first, because it is the right thing to do, and second, to warn readers that the content may contain errors, since AI does not always provide the necessary context.
Technology is here to stay. Some may view it as a threat, especially in professions such as accounting and interior and graphic design, where AI has made work faster and neater, and produces output superior to that of human workers.
However, no matter how good such technology becomes, it will always require human intervention. The challenge for those who face the threat of being made redundant is to learn how to plug in and turn AI into a tool rather than a threat.
In Kenya, editors are already exploring the advantages of working with AI tools while also grappling with the challenges these present. In Chinese newsrooms, news is often presented by AI newscasters, some of whom have become more popular than their human counterparts.
In Zimbabwe, one newsroom is conducting a similar experiment with the main challenge being how to make the artificial presenter sound authentically Zimbabwean. These are developments that Kenyan editors are following closely and will be the subject of the upcoming annual convention, which is exploring the question of how to identify and position truth in an age where AI can create disinformation, misinformation and malinformation.
A case in point is the recent incident involving Kibra MP Peter Orero and his driver, who were filmed by CNN correspondent Larry Madowo while driving on the wrong side of the road.
Within hours of Larry posting the video, Kenyans on social media had used AI to generate hundreds of images and videos caricaturing the legislator and his driver. The matter escalated quickly, leading to the arrest and prosecution of the driver, who was fined Sh100,000 for the offence.
This is one case of how AI has been deployed to achieve a public good. But what about instances where leaders are depicted dancing or addressing press conferences when such events never took place? This is a big challenge because it raises the question of trust. Does AI increase or erode trust in news? If it erodes trust, what can be done about it? This is one of the big questions of our time, especially as we head towards elections, where it is possible to create memes mimicking leaders. Gullible audiences are likely to fall for such memes since they play to their fears and to their conscious and unconscious biases.
In other industries, particularly those that handle big data, AI is a welcome tool that has improved the efficiency and accuracy that support decision-making on a diverse range of issues, from whether to lend money to a bank client to which products to push to shoppers' devices.
However, an inherent danger in this trend concerns the safety and security of personal data, since AI systems are susceptible to third-party breaches, as has been noted in the US. This has major implications for the rights and freedoms of individuals, especially the right to privacy. There is a need to develop rules and regulations governing how organisations handle data while using AI, as well as laws to sanction offenders.
These developments present challenges for public institutions such as the Office of the Data Protection Commissioner. By September, the office had received 8,251 complaints of data breaches and, although it had resolved a staggering 7,673 and issued 178 compensation orders, the work ahead of it remains onerous.
How then can organisations build trust in their systems in the age of AI, given the potency of the threats posed by the new technology? This question has no easy answers. However, organisations will need to be more agile in responding to threats, more innovative in solving problems even before they arise, and more willing to invest in solutions that increase customer satisfaction while ensuring efficiency. In short, there is no running away from AI. Not in the near future.
mbugua@nairobilawmonthly.com