Artificial Intelligence (AI) is an umbrella term for computer software that mimics human cognition in order to perform complex tasks and learn from them. Machine learning (ML) is a subfield of AI that uses algorithms trained on data to produce adaptable models that can perform a variety of complex tasks. An AI tool is a software application that uses artificial intelligence algorithms to perform specific tasks and solve problems. AI tools can be used in a variety of industries, from healthcare and finance to marketing and education, to automate tasks, analyse data, and improve decision-making.
Most of us are caught up in the artificial intelligence race, and the fear of lagging behind is real. While everyone seems to agree that we must all jump on the AI bandwagon to keep up with technology, no one seems to know where this train is going. But are we not forgetting to ask the crucial questions: what type of AI do we really want for our society? What impact is this technology having, or will it have, on people's rights and lives? Are there social areas where the decision is too important or too sensitive to leave to a machine at all? After all, there may not be a single race for AI but multiple ones, running in opposite directions. That is why one of the biggest issues is a general failure to understand the real effect of disinformation in the AI era. Any initiative or effort aimed at combating disinformation should be an evidence-based policy solution, grounded in clear empirical data on actual harms of a scale that merits intervention.
On the sidelines of a Masterclass for journalists and CSOs in Cameroon, #defyhatenow opened a conversation on how AI impacts the production and distribution of information in the digital age. Two lead trainers from Africa Check Senegal, Valdez Onanina and Azil Momar, shed more light during a discussion with our Africa Factchecking Fellowship – #AFFCameroon fellows based in Yaoundé.
The session was moderated by Donald Tchiengue, Digital Projects Coordinator, who launched discussions on how to find fact-checking articles on climate change, obtain accurate information, and understand the opportunities and threats of AI.
Fact-checking is based on factual information: you always have to be sure that every piece of information is true. Conducting an investigation, which takes longer, is generally the best option. The trainers focused on a few angles, explaining why some information is verifiable and other information is not, and how far one should go to fact-check a claim. They then offered recommendations on living with the advent of artificial intelligence, summarised in the points below.

Is AI an opportunity or a threat?
- AI can represent a danger depending on who is using it; it all depends on what we are looking for.
- AI can provide guidance on the information we are looking for.
- AI can help us verify content on social networks.

Which subjects are important to fact-check, and which are not? Criteria for information worth fact-checking:
- Public interest
- Virality
- Public demand
- Whether the claim is verifiable
- The trend around the subject matter

What are the limits of artificial intelligence?
- ChatGPT does not always produce true and accurate information.
- AI will never replace human memory.
#defyhatenow in the Age of AI: The Mix of Artificial Intelligence, Mis/Disinformation and Hate Speech
The use of AI for hate speech regulation directly impacts freedom of expression, which raises concerns about the rule of law and, in particular, the notions of legality, legitimacy and proportionality. Relying on AI, even without human supervision, is a necessity for content that could never be ethically or legally justifiable, such as child abuse material. The issue becomes complicated, however, for contested areas of speech such as hate speech, where there is no universal ethical or legal position on what it is and when (if at all) it should be removed.
Although technologies such as natural language processing and sentiment analysis have been developed to detect harmful text without relying on specific words or phrases, research has shown that they are still far from being able to grasp context or detect the intent or motivation of the speaker. This underscores the importance of contextualising speech, something that does not align well with the design and enforcement of automated mechanisms and that could pose risks to the online participation of minority groups. Automated mechanisms, in short, fundamentally lack the ability to comprehend the nuance and context of language and human communication.
AI must always remain under human control, and states should offer effective access to remedies for victims of human rights violations arising from the way AI functions. Equally important is the promotion of AI literacy: there is space for offering human rights training and capacity-building to those who are directly or indirectly involved in the application of AI systems.
Some Cameroonian civil society organisations working to promote a healthy internet and spaces of public freedom gathered in Yaoundé on June 09, 2023, and made the following observations: Artificial Intelligence (AI) is a next-level technology that is developing rapidly and has the potential to positively revolutionise many aspects of our lives. However, the development of AI also raises a number of ethical and existential questions that need to be addressed urgently, in the world at large but also in Cameroon. To get more insight into the declaration made at this meeting, read here for more.
It is imperative to adapt and accept that AI is becoming the new normal in our society and will be ever more present in our day-to-day lives. For at least a minimum level of safety and security, everyone should make it a constant habit to stay observant, think critically and ask the right questions, while keeping up with the newest trends in AI and understanding how it works so as to tackle its challenges with ease. #defyhatenow shall organise a training session for mainstream media leaders to explain these concepts, and developing activities within the organisations of people affected or challenged by hate speech and disinformation will allow more activists to fight this scourge. Fact-checking plays a crucial role in verifying facts and validating the results produced by AI algorithms. By promoting a culture of fact-checking, we can reinforce the trust in and credibility of this technology.