Like a double-edged sword, AI can be wielded for good or evil. Unfortunately, we currently lack adequate regulations to ensure ethical AI use.
By Naini Lankas
The rise of Artificial Intelligence (AI) has been a game-changer in recent years, bringing both incredible advancements and potential dangers. While AI holds tremendous promise, it also threatens our fundamental human rights. As governments and large corporations increasingly adopt AI, how can we regulate its use to prevent abuses and safeguard our rights?
Absent adequate regulation, countries like China have been free to employ AI in conducting mass surveillance of their citizens. Such actions blatantly trample upon people's rights to privacy and freedom of expression.
In fragile democracies, the rise of deepfakes has become a weapon for spreading misinformation and discrediting political opponents during election seasons. A glaring example was witnessed in Burkina Faso, where deepfake videos surfaced falsely portraying Pan-Africanists as supporters of a military junta. This manipulation of public perception severely threatens democracy and the sacred right to fair elections.
In developed democracies, apprehension is growing over the impact of artificial intelligence on electoral processes. As the United States' 2024 campaigns and elections approach, concerns loom large. Generative AI technologies can swiftly produce tailored campaign emails, texts, and videos. More alarming is the possibility of leveraging this technology to deceive voters, impersonate candidates, and undermine the very foundations of elections, at a scale and speed of manipulation never seen before. Recently, a wave of AI-generated images depicting former US President Trump's arrest circulated online, coinciding with his scheduled court appearance.
In recognition of these concerning trends, Amnesty International launched its global Ban the Scan campaign in January 2021. The initiative aims to end the use of facial recognition systems, a form of mass surveillance that amplifies racist policing and endangers the right to protest. The campaign emphasised that algorithmic technologies, including facial recognition scanners, violate privacy rights and encroach upon the freedoms of peaceful assembly and expression.
The abuses of AI extend even further, reaching the right to privacy guaranteed by the International Covenant on Civil and Political Rights. Certain governments employ social media monitoring programs to gather citizens' data, which AI-powered systems then analyse to identify perceived threats. This unwarranted collection and analysis of vast amounts of private data constitutes a flagrant violation of privacy rights.
Additionally, AI raises complex issues regarding intellectual property rights, which are enshrined in various international declarations and covenants. Questions arise about who owns AI-generated works or inventions, whether AI’s inventions should be considered prior art, and who owns the datasets used for AI learning. Furthermore, determining liability becomes a pressing matter when AI-generated creativity and innovation infringe upon others’ rights or legal provisions.
Given the evident need to curb AI abuses, comprehensive regulation is necessary. Private companies utilising AI must establish internal mechanisms for accountability, while governments should conduct thorough assessments of AI systems during the development phase to identify potential human rights risks. Clear regulations must also be developed to guide the ethical use of AI.
Legal solutions can offer avenues to address these issues. In the UK, for instance, computer-generated literary, dramatic, musical, or artistic works are protected under the Copyright, Designs and Patents Act 1988, although the patentability of computer-generated inventions lacks explicit legal provisions. Ownership of rights rests with the creator of the AI design unless the work was commissioned or created within the scope of employment, in which case the rights belong to the commissioning party or the employer.
A 2019 study by the European Parliament's Science and Technology Options Assessment (STOA) panel proposed several policy options to govern algorithmic transparency and accountability. These included raising awareness through education, establishing watchdogs and whistleblower protections, ensuring accountability in the public sector's use of algorithmic decision-making, implementing regulatory oversight and legal liability, and coordinating globally on algorithmic governance. Specific measures suggested to promote algorithmic transparency included algorithmic impact assessments, algorithmic transparency standards, and explanatory models to shed light on algorithmic decision-making processes.
As Kenya embraces the advancements of AI, it becomes paramount to ensure its use aligns with the protection of human rights. The government, through the Office of the Data Protection Commissioner, must take the lead in regulating AI and holding accountable those who deploy it, be they companies or individuals. A proactive approach to regulation is essential in safeguarding the rights and welfare of the population.
Furthermore, raising public awareness is crucial in addressing the potential risks associated with AI. Educating citizens about the capabilities and limitations of AI technology can empower them to make informed decisions and actively participate in shaping its use. Through collective knowledge and engagement, we can effectively mitigate the negative impacts of AI on our society.
In this rapidly evolving landscape, we must also acknowledge the need for ongoing dialogue and collaboration among stakeholders. Governments, technology experts, human rights organisations, and the public must come together to forge a comprehensive framework that balances innovation with protecting our fundamental rights. We can create regulations that reflect diverse perspectives and ensure accountability by fostering an inclusive and participatory approach.
The journey to regulating AI is not without its challenges. The complexity of the technology, coupled with its rapid advancement, demands adaptable and forward-thinking solutions. We must anticipate potential pitfalls, continuously evaluate the impact of AI on human rights, and iterate our regulatory frameworks accordingly.
Ultimately, the regulation of AI is not a hindrance to progress but rather a necessary safeguard for our collective well-being. By establishing clear guidelines, we can harness the potential of AI while preventing its misuse and protecting our rights. The path ahead requires boldness, collaboration, and a steadfast commitment to upholding human rights in the face of technological advancements.
As we navigate this uncharted territory, let us remember that the power of AI lies in our hands. With responsible and ethical regulation, we can harness its transformative capabilities and create a future where AI truly becomes a force for good, empowering and enhancing the lives of all.
– Ms Naini Lankas is an educator, an AI enthusiast, and a law student.