Expert Speak Digital Frontiers
Published on Nov 06, 2025

Kenya’s 2022 elections showcased how artificial intelligence can be harnessed to monitor digital spaces, counter disinformation, and uphold the integrity of democratic processes.

AI and Electoral Integrity: Insights from Kenya’s 2022 Elections


The 2022 Kenyan general elections marked a new phase in electoral dynamics. It was the first recorded instance in which artificial intelligence (AI) tools were deployed at a national level in Africa to safeguard against targeted polarisation during electoral processes. The national initiative, led by the Maintaining Peace through Early Warning, Monitoring and Analysis Consortium (MAPEMA), demonstrated to the global community how AI technologies can be applied to safeguard electoral integrity and ethically monitor public sentiment throughout an election period, fostering healthy communication with voters and shielding them from polarising campaigns on social media platforms.


The planning and implementation of the system employed in Kenya represented a significant advancement in the integration of AI within political frameworks. Nonetheless, the experience also highlighted critical areas requiring further analysis to ensure the security and integrity of elections as AI tools become increasingly common in the political landscape. This article examines the fundamental tenets demonstrated by the Kenyan initiative that merit replication in future undertakings in other electoral processes. It also analyses its relevance for India.

The Need for a Competent Overseeing Body

The MAPEMA consortium was a collaborative initiative in Kenya that addressed hate speech, disinformation, and online polarising content during electoral periods through advanced technological tools for monitoring, analysis, and rapid response.

Established ahead of the 2022 general elections, it played a key role in reinforcing early warning systems and promoting peacebuilding by integrating data-driven methods with community-led interventions. Comprising Code for Africa, Shujaaz Inc., and Aifluence — non-governmental organisations recognised for their expertise in digital technologies, youth engagement, and influencer networks — the consortium collaborated closely with national institutions such as the National Cohesion and Integration Commission (NCIC) and the UWIANO Platform for Peace, alongside United Nations agencies including the UN Development Programme (UNDP) and the Office of the United Nations High Commissioner for Human Rights (OHCHR). This broad partnership enabled systematic monitoring, reporting, and verification of online trends, while supporting both state and non-state actors in implementing proactive measures for peaceful elections.

MAPEMA’s structure exemplified a functional governance framework for integrating AI into electoral processes. The combination of state authority, international resources, and the ethical standards of non-governmental entities helped to ensure balanced and beneficial outcomes. By contrast, state-only bodies could have potentially risked political influence, while international or private-led entities without state oversight would have lacked local legitimacy and been susceptible to economic biases. Such imbalances also posed potential national security risks, given the sensitivity of electoral data collection and analysis.

Operationalising AI Tools During Elections

Kenya exhibits a notably high rate of internet penetration, with approximately 85 percent of its population having access to online services, driven primarily by engagement on social media platforms. This environment provided a favourable context for the collection and analysis of diverse data during election periods, offering valuable insights into public sentiment. 

The MAPEMA Consortium employed specialised AI models designed to scrape, store, and analyse large volumes of social media data to counter the spread of toxic content and manipulation within digital political spaces. These systems facilitated the early detection of online hate speech and the identification of misinformation trends among voter demographics. For example, the consortium’s AI tools detected and flagged over 800 instances of hate speech on Facebook, which were then shared with the social media platforms for removal, helping to curb online incitement aimed at political polarisation and the spread of disinformation during the election period.
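To make the detect-and-flag workflow concrete, the toy sketch below shows the general shape of such a pipeline: posts are scanned against a watchlist and matching items are queued for referral. This is purely illustrative; the MAPEMA consortium's actual models are not public, real systems use trained classifiers rather than keyword matching, and every name, term, and post here is a hypothetical placeholder.

```python
# Illustrative sketch ONLY: a toy keyword-based flagger, not the MAPEMA
# consortium's actual system. Real deployments use trained classifiers;
# all watchlist terms and posts below are hypothetical placeholders.
from dataclasses import dataclass

# Hypothetical watchlist of inciting phrases (placeholder values).
WATCHLIST = {"traitor", "enemy", "crush them"}

@dataclass
class Post:
    post_id: str
    text: str

def flag_posts(posts):
    """Return the IDs of posts containing any watchlist term (case-insensitive)."""
    flagged = []
    for post in posts:
        text = post.text.lower()
        if any(term in text for term in WATCHLIST):
            flagged.append(post.post_id)
    return flagged

posts = [
    Post("1", "Vote peacefully and verify claims before sharing."),
    Post("2", "They are the enemy, crush them at the polls!"),
]
print(flag_posts(posts))  # prints ['2']
```

In a production setting the flagged IDs would feed a human-review queue before any referral to a platform, which is consistent with the consortium's emphasis on verification and rapid response.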

The Kenyan initiative aimed to promote targeted peace messages by enhancing voter education and deploying chatbots capable of engaging voters with accurate and relevant information. The project’s broader efforts concentrated on identifying, analysing, and addressing emerging trends in online political discourse, culminating in the publication of “Youth Pulse” articles that presented key analytical findings.

Although the data collected through this initiative is largely open access, as much of it originates from social media platforms, the analytical findings and trained models derived from it can also be used to potentially influence the electoral system. For instance, the political inferences gathered from this exercise can be misused for voter profiling. Moreover, identifying demographic groups seen as needing pacification through targeted messaging raises further ethical and security risks, as malicious actors could exploit these supposedly ‘volatile’ communities to influence electoral behaviour.

Updating Legislation

A critical function that AI can perform within online political spaces is mitigating the dissemination of misleading or fabricated media circulated by political actors. The advent of Generative AI has made it increasingly easy to produce highly convincing fake content designed to manipulate voter sentiment. Employing AI models to detect and counter such material represents one of the most effective approaches to safeguarding the integrity of electoral campaigns. 


The constitutional amendment of 2011 in Kenya, which facilitated and supported the integration of technology within the electoral process, has established a valuable precedent for updating legislation to ensure the responsible and transparent application of AI in democratic contexts. The Kenya Integrated Electoral Management System (KIEMS), developed by the Independent Electoral and Boundaries Commission (IEBC), exemplified the potential of combining technological and manual processes to improve the efficiency and accountability of election management. It is this pre-existing legislative foundation that facilitated the implementation of the initiative against hate speech and political manipulation in online spaces. 

The effective integration of AI into electoral processes hinges on the continual evolution of legislative frameworks that balance innovation with accountability. Updating existing laws to address the ethical, procedural, and security implications of AI is essential to preserving electoral integrity. Such reforms not only mitigate risks of technological misuse but also reinforce transparency and public confidence, ensuring that democratic institutions remain resilient in the face of rapid digital transformation.

Addressing the Challenge of Mistrust of Technology

Following the 2022 elections, the technology employed by the IEBC, alongside the monitoring effort led by the MAPEMA consortium, was formally challenged before the Kenyan Supreme Court. Petitioners alleged that the technology failed to meet the prescribed constitutional and statutory standards, that the IEBC had not fulfilled its constitutional obligations when it delegated the design and implementation of the KIEMS to a foreign entity, and that the Commission had resisted efforts to ensure the transparency and accountability of that entity’s operations. These allegations reflected broader public mistrust of the integration of AI into electoral processes.

However, the Kenyan Supreme Court did not engage substantially with these arguments, finding the evidence presented insufficient. In its defence, the IEBC maintained that the data remained secure and accessible only to authorised personnel, and that a clear audit trail had been kept for all activities involving the information, a record that proved critical in substantiating this claim. The post-election case thus established a significant legal precedent validating the use of AI models to monitor electoral procedures.

The concerns about the MAPEMA consortium’s technology are also indicative of a broader issue. The growing mistrust of AI among the public is a global phenomenon, with only 46 percent of people worldwide expressing confidence in these systems. Incorporating AI into electoral processes makes addressing this mistrust essential, as failing to do so could substantially erode trust in democratic systems. A crucial first step lies in ensuring both algorithmic and operational transparency. Although many AI models function as black boxes, publishing the specific algorithms employed for data collection and analysis can help reduce the prevailing uncertainty surrounding their operations.


As demonstrated by the case in Kenya, clear communication and accountability regarding the design, deployment, and oversight of electoral technologies can build confidence in their utilisation in democratic processes. Transparent disclosure of algorithms, data-handling procedures, and system governance structures is therefore vital to safeguarding electoral legitimacy. By ensuring that technical processes are both auditable and comprehensible to relevant stakeholders, institutions can foster informed trust and mitigate perceptions of opacity that often fuel public scepticism towards AI-driven systems.

Relevance for India

Replicating the Kenyan programme in another national context introduces distinct advantages and challenges. 

In the Indian socio-political environment, one key advantage is the availability of existing work on AI models trained using Indian social media data, which would reduce implementation costs and enhance the accuracy of analysis and prediction. Furthermore, such an initiative has the potential to increase voter turnout among individuals aged 25–34, who comprise approximately 65 percent of India’s internet-using population.

However, the country’s vast demographic diversity and uneven distribution of internet access, both across regions and age groups, pose significant limitations, as findings derived from social media data may not be generalisable to the wider electorate. Extending an AI-driven social media initiative across the entire electoral landscape risks excluding a substantial proportion of voters. Consequently, it is essential to conduct preliminary research to define the feasible scope of such an undertaking and to explore strategies to bridge the participation and access gaps that remain.


Pranoy Jainendran is a Research Assistant with the Centre for Security, Strategy and Technology at the Observer Research Foundation.

The views expressed above belong to the author(s).
