The AU Continental AI Strategy: Concrete Safety Proposals or High-tech Hype?

I. Introduction

As AI continues to exhibit more advanced capabilities and its use grows, several countries and regions have been crafting agendas and strategies on how to regulate, develop and integrate the technology. In line with these trends, the African Union (AU) published its Continental Artificial Intelligence Strategy, which I will refer to as ‘the AU AI Strategy’ in this piece. The AU AI Strategy is designed to serve as a master plan that distils the continent’s collective goals on AI.

AI presents major risks that become more worrisome as its capabilities grow. Yet those in control of its development appear to be investing staggering resources in enhancing its power rather than its safety. In this piece, I assess how the AU AI Strategy addresses, and fails to address, AI safety concerns. The analysis proceeds as follows. To begin with, I give a general overview of the AU AI Strategy. I then clarify what I mean by “AI safety concerns”. From there, I assess whether the AU AI Strategy addresses these concerns sufficiently. I close the piece by recommending a way forward. My overall position is that although the AU AI Strategy addresses some AI safety concerns, it demonstrates limited foresight, as it falls short of addressing some serious safety issues that could emerge as AI becomes more capable.

II. General Overview of the AU AI Strategy

The AU AI Strategy understands AI to be a force that could help rescue the continent from many of its problems. It positions AI as a technology that could significantly advance the attainment of the AU Agenda 2063 targets and the Sustainable Development Goals (SDGs). In this same spirit, the AU AI Strategy cites a PwC study estimating that AI could contribute up to US$15.7 trillion to the world economy by 2030. However, there is a divide in AI development between the continent and other regions of the world, and the document’s authors show a growing apprehension that this divide could prevent Africa from seizing the opportunity.

Given the Strategy’s importance, there should have been more outreach and transparency in the consultations that took place during its development. Compared to equivalent frameworks in other regions of the world, the working group on AI could have sought feedback from a wider range of stakeholders. Additionally, the consultation period, which was only six days, was too short to allow for wide-ranging and detailed responses. Finally, for the sake of transparency, the contributions received during the consultative period should have been publicly released so that readers could assess whether the AU Commission took stakeholder opinions seriously.

As work on the AU AI Strategy was ongoing, the African Union Development Agency – New Partnership for Africa’s Development (AUDA-NEPAD) was developing the AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 (which I will refer to as ‘the AUDA-NEPAD White Paper’) and the AUDA-NEPAD Artificial Intelligence Roadmap for Africa: Contributing Towards a Continental AU Strategy on AI (which I will refer to as ‘the AUDA-NEPAD Roadmap’). Both documents were prepared under the guidance of the AU High-Level Panel on Emerging Technologies (APET), an advisory board within AUDA-NEPAD comprising 10 expert panellists appointed by the AU Commission Chairperson.

I will reference the AUDA-NEPAD White Paper and the AUDA-NEPAD Roadmap throughout this piece. In my view, given the role that AUDA-NEPAD and APET play in the AU organisational structure, these documents give us an idea of what AU experts think about pertinent AI issues. AUDA-NEPAD is the AU agency that provides technical and implementation support to AU Member States for priority projects. APET, for its part, advises the AU Commission and AU Member States on harnessing emerging technologies for economic development. In any event, the AU AI Strategy acknowledges the contribution of AUDA-NEPAD personnel to its development. These documents could therefore have a significant bearing on how AU Member States decide to implement the AU AI Strategy or approach AI governance in general.

General content of the AU AI Strategy

The AU AI Strategy is organised into three broad sections. Its introduction defines AI and then provides a situational analysis of AI development in Africa, the regional policy context for AI, and the drivers, risks, enablers and inhibitors of AI uptake on the continent.

The second part of the AU AI Strategy carries the most important details. Right from the start, there is an ambition to ensure that AI development centres African people, is carried out responsibly and places the continent on par with other global players. In line with this vision, the mission emphasises accelerating the development of AI capabilities for the sake of socio-economic progress while minimising the risks that AI poses to African people. It also hints at some of its underlying principles by stating that all these efforts should align with the AU Agenda 2063 and the SDGs.

In the same section, the AU AI Strategy lays out its six main focus areas: (1) Maximising the benefits of AI for social and economic development and cultural renaissance, (2) Minimising risks and safeguarding AI development and adoption from harm to African people, societies and environments, (3) Building capabilities in infrastructure, datasets, computing, skills and education, research and innovation, and specialised AI platforms, (4) Fostering regional and international cooperation, (5) Accelerating AI investment and (6) Creating an inclusive governance and regulatory framework. It discusses each focus area in depth, providing high-level recommendations and action points. It also presents fifteen strategic objectives that align with the document’s focus areas.

The same part of the AU AI Strategy also discusses the principles that guided its drafting: AI should address local African needs first; AI should be people-centred; AI should uphold human rights and dignity; AI should advance peace and security; AI production, development and use should be inclusive and respect the diversity of African people; AI should be ethical and transparent; governance approaches to AI should foster regional cooperation and integration; and, finally, AI solutions should be supported by formal and informal education.

The final section outlines an implementation roadmap and recommendations on capacity building. The AU and AU Member States should aim to achieve the fifteen strategic objectives within a five-year period from 2025 to 2030. This effort is divided into two main phases. In the first phase (2025 to 2026), the focus is on establishing governance frameworks, developing strategic documents, mobilising resources and building capacities, including creating an AI advisory board and centres of excellence. In 2027, a review of phase I will be conducted. The concluding phase (2028 to 2030) will then focus on implementing core projects and actions of the AU AI Strategy.

How consequential are AU strategies of this kind?

AU strategies do not compel AU Member States to create and enforce any laws or policies. However, they often exert significant influence over how these countries design and implement laws and policies. Various national strategies either borrow from AU strategies or aim to align with them. Consider the AU Agenda 2063, the continent’s blueprint for development. Several national development plans highlight that their proposals align with or aim to achieve the agenda’s objectives. For instance, the Ghana Vision 2057: Long Term National Development Perspective Framework states that it is guided by the AU Agenda 2063. Similarly, the Egyptian Vision 2030 National Agenda for Sustainable Development is explicitly written in line with the AU Agenda 2063. Even more specific strategies have an influence on national strategy. The Kenya National Digital Master Plan 2022-2032, for example, mirrors the AU Digital Transformation Strategy (2020-2030) at the national level. Therefore, even though AU strategies do not have legal force, they tend to trickle down and influence national policy.

III. What exactly are AI safety concerns?

This piece focuses on risks posed by advanced AI, such as artificial general intelligence, that could lead to extinction, global catastrophe or societal-scale suffering. Although this view faces significant pushback, many leading AI experts believe these risks could actually materialise. In June 2023, AI experts and other concerned professionals signed a statement by the Center for AI Safety urging the world to prioritise existential AI risk mitigation alongside threats like pandemics and nuclear war. Earlier in the same year, AI experts and advocates signed an open letter by the Future of Life Institute calling for a pause of at least six months on developing systems more powerful than GPT-4.

One reason many consider these risks plausible is the rapid pace of AI development. Consider the following examples. AI has been documented surpassing humans in a growing range of tasks. A 2023 Microsoft Research paper analysed GPT-4’s expertise across various subjects and tasks and concluded that the model exhibited “sparks of artificial general intelligence”. More recently, OpenAI released o1, a model designed to spend more time reasoning before it responds, improving its performance on complex tasks. OpenAI’s evaluations show that o1 rivals human experts on numerous complex reasoning benchmarks. In mathematics, it ranked among the top 500 students in the USA Math Olympiad qualifier, while in chemistry, physics and biology, it outperformed PhD-level experts.

If mismanaged, the development and use of rapidly advancing AI could have significant repercussions for communities across the world. This piece explores how the AU AI Strategy addresses the following challenges posed by AI: information security, engineered pandemics, lethal autonomous weapons, cyberattacks, chemical, biological, radiological and nuclear threats, and the risks posed by open-source AI. The following section discusses whether the AU AI Strategy adequately addresses these concerns by examining, first, whether the AU holds an outlook on AI capabilities that could shape its approach to AI safety issues and, second, whether the AU AI Strategy acknowledges important AI safety concerns and proposes appropriate measures to mitigate them.

IV. Has the AU AI Strategy sufficiently addressed AI safety concerns?

General view of AI capabilities

Countries that are home to advanced AI development have acknowledged the great harm that highly capable AI could cause. Last year, then UK Prime Minister Rishi Sunak directed that his government’s approach to AI be urgently reoriented to address existential risk. This year, the California State Assembly passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). Among other things, the legislation sets out safety requirements for developers of advanced AI models that could cause ‘critical harm’. The bill defines ‘critical harm’ to include mass casualties resulting from the creation of a chemical, biological, radiological or nuclear weapon, cyberattacks on critical infrastructure, and harms caused by an AI model acting with limited human oversight, intervention or supervision.

Furthermore, a report prepared by the UN High-level Advisory Body (HLAB) on AI indicated that, relative to experts from other regions, African experts had the highest level of concern about unintended multi-agent interactions among AI systems and the second highest level of concern about unintended autonomous actions by AI systems (excluding autonomous weapons). Ironically, the understanding of AI capabilities endorsed in the AU AI Strategy seems to lag well behind that of the African experts polled by the UN HLAB. Although its definition was contested among AU Member States, the AU AI Strategy defines AI as,

…computer systems that can simulate the processes of natural intelligence exhibited by humans where machines use technologies that enable them to learn and adapt, sense and interact, predict and recommend, reason and plan, optimise procedures and parameters, operate autonomously, be creative and extract knowledge from large amounts of data to make decisions and recommendations for the purpose of achieving a set of objectives identified by humans.

By defining AI as a computer system that pursues objectives identified by humans, the AU AI Strategy fails to consider forms of highly capable AI that could develop goals of their own. The AU AI Strategy’s vision of AI is not a surprise. It fits fairly well with the one presented in the AUDA-NEPAD White Paper, which, in its initial parts, claims to dispel some ‘myths’ about AI. Where it discusses how ‘independent’ AI truly is, the document argues that AI cannot make independent decisions because its cognitive abilities are currently not very advanced, and that AI cannot operate entirely on its own since it requires training and human intervention. Further, it addresses the ‘misconception’ that AI will eventually enslave humankind and destroy the world. It posits that even with the most advanced algorithms, AI is incapable of thinking like humans and is unlikely to learn to do so. In fact, it suggests that AI will instead mostly assist humankind in varied socio-economic fields.

The AU AI Strategy demonstrates a limited view of AI. In the same vein, the AUDA-NEPAD White Paper dismisses these ‘myths’ as sensationalised threats or exaggerated capabilities. Considering the current state of the art in AI and expert predictions of what it could become, these views are outdated and could leave AU Member States without an adequate grasp of the challenge they face.

Responses to specific AI safety concerns
Information security

Information security focuses on protecting information and information systems from unauthorised access, use, disclosure, disruption, modification or destruction. A major concern in this field is the use of AI for social engineering via disinformation and misinformation campaigns. A recent study found that GPT-3 could produce text nearly as persuasive for US audiences as news articles written by foreign propagandists. Moreover, when techniques like prompt editing and output curation were applied, GPT-3’s articles were as convincing as, if not more convincing than, the propagandists’ articles. Some researchers suggest that advanced machine-learning (ML) techniques could further enhance disinformation campaigns in the future: they could be used to build social listening tools, scale content creation, exploit synthetic media, enhance and deploy online bots, impersonate experts and influential people, and manipulate recommendation algorithms.

The AU AI Strategy has something to say about information security. It concedes that AI-generated misinformation and disinformation seriously threaten societal cohesion, African values and democracy. It recommends mass media and information literacy training across Africa, inspired by this UNESCO Policy Brief, to teach citizens how to identify false information. In line with this, it suggests speeding up the completion and implementation of the AU Regional Media and Information Literacy Framework. Although this is a step in the right direction, the approach could have limits.

While media literacy can help people identify false narratives, it is resource-intensive and difficult to scale. The AU AI Strategy should therefore have suggested additional methods of tackling the problem and outlined ways for African countries to enhance their capacity for these interventions. This report suggests that countries can build resilience against disinformation and misinformation by investing in local journalism, encouraging independent journalism, implementing counter-messaging strategies, and fact-checking and labelling social media content. The Center for Security and Emerging Technology has also argued for research into how AI-driven campaigns develop and spread, in order to build early detection systems. The AU AI Strategy could have proposed technical measures for AU Member States to invest in, for instance “public interest algorithms” that carry out tasks like automatic hoax detection (a minimal sketch follows below). The AU AI Strategy could have also set out how African countries can prevent threat actors from accessing user data.
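
For concreteness, here is a minimal, purely illustrative sketch of automatic hoax detection framed as supervised text classification. This is my own example, not anything proposed in the AU AI Strategy; the tiny inline dataset, the labels and the model choice are all placeholder assumptions, and a deployed “public interest algorithm” would need far larger, locally relevant, multilingual data plus human review.

```python
# Illustrative hoax-detection baseline: TF-IDF features + logistic regression.
# All claims and labels below are invented placeholders for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "Health ministry confirms updated vaccination schedule",  # placeholder
    "Miracle herb cures malaria overnight, doctors stunned",  # placeholder
    "Central bank announces revised interest rates",          # placeholder
    "Secret law has abolished all elections, sources say",    # placeholder
]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = likely hoax (illustrative only)

# A simple, auditable pipeline: word/bigram TF-IDF scores feed a linear model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(claims, labels)

# Flag a new claim for human fact-checkers rather than automatic removal.
print(model.predict(["New miracle drink cures all known diseases instantly"]))
```

Even a baseline like this illustrates the point in the text: such systems are cheap to prototype, but making them work at scale requires sustained investment in data collection and fact-checking capacity.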

Engineered pandemics

Large language models have provided useful insights across various domains, including biosecurity. However, they also pose risks because they can surface sensitive information – for example, information about how to engineer dangerous pathogens. In the wrong or naive hands, this kind of information could lead to the creation or modification of pathogens and toxins that harm society. Even without AI, research shows that both novices and experts can produce work that creates biosecurity risks; AI simply increases the likelihood of this happening. Experts worry that future AI systems could help engineer new pathogens capable of causing pandemics. Such a situation could escalate beyond the impact of a naturally occurring pandemic, for instance if multiple pandemic-capable agents were intentionally released at multiple travel hubs. Consequently, some scientists are pushing for governments to actively regulate advanced models and impose safety measures on developers.

Despite identifying health as a priority sector for AI integration, the AU AI Strategy seems mostly oblivious to this serious issue. It only discusses how helpful AI was during the COVID-19 crisis and encourages AU Member States to foster regional and international cooperation to address pandemics using AI. It does not consider the flipside: AI could increase the likelihood of pandemics. The authors of the AU AI Strategy should have been especially concerned about this issue because the continent is currently ill-equipped to handle pandemics and epidemics owing to its weak healthcare systems. Healthcare systems in African countries face serious structural and operational challenges, such as shortages of healthcare professionals and facility supplies.

The AU AI Strategy missed an opportunity to outline measures African countries could take to prevent AI-driven pandemics. One approach could be to set AI procurement guidelines that mandate companies selling AI models trained on biological data to report critical information. Similar to the requirements in the White House Executive Order, these companies would need to disclose model details such as red-team evaluations and cybersecurity protections. Additionally, AU Member States should establish biosafety and biosecurity standards for biolabs that integrate AI into their procedures, given the continent’s weak biolab regulation. In this report, African countries with level 4 biosafety labs scored poorly on biosecurity governance (between 1 and 4 out of 18) and on governing dual-use research (between 0 and 1 out of 10). Another strategy could involve suggesting ways to regulate access to DNA synthesis.

The AU AI Strategy could also have proposed steps African countries can take to prepare for an AI-engineered pandemic. AU Member States could invest in surveillance and early detection systems and in pandemic preparedness supplies such as protective equipment, medical countermeasures and vaccines. For example, this paper discusses how sector and supply-chain analyses can be used to determine which types of workers most need protective equipment. Considering their limited resources, this could be useful for African countries. All these efforts can be coupled with finding ways to counter misinformation and disinformation campaigns, which can hinder pandemic response efforts.

Lethal autonomous weapons

Lethal autonomous weapons (LAWS) are already a reality. While they do not necessarily need AI to function, AI could enhance their capabilities. An open letter by the Future of Life Institute explained that because the raw materials required to make such weapons are neither costly nor hard to obtain, they are likely to be mass-produced. Further, because of the technical problems plaguing modern ML, such as the lack of interpretability, LAWS could make fatal mistakes. There is fear that LAWS could make wars not only more severe but also more frequent: because they can replace human soldiers, they could lower the threshold for engaging in conflict.

The International Committee of the Red Cross emphasises that African countries should actively participate in discussions about autonomous weapons, not least because some of them produce arms. Furthermore, AI has been changing the dynamics of armed conflict on the continent by introducing, for instance, AI-enabled surveillance and AI-powered drones such as the Kargu-2. The AU AI Strategy briefly notes that the use of autonomous weapons and the broader weaponisation of AI deserve attention because “complex AI systems” might escalate conflicts through incorrect predictions. Given that the AU AI Strategy aims to highlight the AU’s main concerns, it could have devoted more space to this issue.

Although AU law generally supports a ban on weapons that are not subject to human control, African countries are not the main possessors of such weapons. Regardless, the AU AI Strategy could have recommended that African countries ban the use of these weapons in their jurisdictions. Beyond that, it could have suggested how AU Member States can push for a ban on AI-dependent LAWS beyond Africa. Research by ILINA argues that there are strategic reasons why Global South countries could influence the path to highly capable misaligned AI. It bases its arguments on the crucial role that Global South countries have historically played in designing and using multilateral rules and institutions in international environmental law and intellectual property law. The upshot is that African countries have had a say in shaping international law and action, and could continue to do so. Further, some Global South countries have successfully opposed weapons testing in their territories and influenced the ratification of instruments like the African Nuclear-Weapon-Free Zone Treaty. African countries can therefore collectively assert more influence on the global discourse surrounding LAWS, and the AU AI Strategy could have recommended how exactly they can synergise to do this.

Cyberattacks

Cyberattacks have long been a threat, and AI could make them more accessible, more frequent and more destructive. AI has enabled cyber attackers to enhance their traditional attack tactics, making attacks faster, larger in scale and more automated. For example, this study shows that LLM agents can autonomously hack websites without knowing their vulnerabilities beforehand. Further, the study claims that LLM-agent hacks are likely cheaper than hiring a cybersecurity analyst. In a worst-case scenario, advanced models could simplify attacks on critical infrastructure. Essential services that people depend on to survive, such as the electric grid, water supply systems and port facilities, run on vulnerable systems. The Center for AI Safety explains that large-scale cyberattacks on critical infrastructure have so far been limited because they are mostly carried out by state actors, which exercise restraint. The growing concern, however, is that unaccountable non-state actors could soon carry out such attacks.

AU Member States have already fallen victim to numerous critical infrastructure attacks. Acknowledging these cybersecurity threats, the AU AI Strategy expresses the need to adhere to the AU’s Common African Position on the Application of International Law to the Use of Information and Communication Technologies in Cyberspace and the Malabo Convention. It also discusses the need for AU Member States to upgrade their cybersecurity capabilities to mitigate the risks that AI solutions may pose, and pushes countries to develop toolboxes for analysing, auditing and protecting information systems.

Unfortunately, these interventions may not be foolproof. Only 15 AU countries have ratified the Malabo Convention, limiting its influence on the continent. It will be hard for AU Member States to upgrade their cybersecurity capabilities enough to mitigate AI-enabled cyberattacks, especially where advanced AI is used in those attacks. This is particularly true because African countries already lag significantly behind in technical cybersecurity measures. The fact that African countries are integrating AI into critical sectors like healthcare and public service delivery makes the risk even more significant: AI systems are highly vulnerable to cyberattacks because they present a large attack surface, and many current cyber defence techniques can be easily circumvented. In light of this, one way the AU AI Strategy could have addressed the issue is by emphasising caution and care in the integration of AI into priority sectors.

Chemical, biological, radiological and nuclear agents

Chemical, biological, radiological and nuclear (CBRN) weapons have long been a cause for concern, and according to a US Department of Homeland Security report, AI introduces a new level of danger. AI could potentially be harnessed to engineer deadly CBRN agents, allowing malicious actors to create and deploy these weapons. Furthermore, access to biofoundries through cloud labs, which provide fully automated access to raw materials, exacerbates the problem.

The AU AI Strategy completely overlooks the topic of CBRN threats. This omission is surprising given that such threats have been discussed in fora like the EU CBRN Centres of Excellence – AU Forum. Further, African nations have historically been affected by these technologies, for instance through proxy wars, and are likely to continue being affected as the technologies become more accessible.

The AU AI Strategy could have proposed measures to prevent CBRN threats. As with engineered pandemics, it could have recommended that AU Member States establish biosafety and biosecurity standards for biolabs that integrate AI into their procedures, and described what such standards should look like. It could also have recommended that African states design strategies to regulate who gets access to DNA synthesis. A proposal on the safety measures African countries could take to protect CBRN material and AI technologies like biological design tools from threat actors would also have been useful. For instance, the EU recommends satellite-assisted tracking of vehicles transporting these materials to prevent theft. African countries should also consider how to protect research, production, storage and trading infrastructure.

Advanced open-source AI

There is currently a robust debate about the benefits and risks of open-source AI. Open-source AI models are models whose components, such as model weights, model architecture, and inference and training code, have been publicly released. Open-source advocates argue that it democratises AI development, and influential, well-resourced AI companies such as Meta strongly support open foundation models. In July, Meta released Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet.

Despite open-source AI being such a hot topic, it is barely discussed in the AU AI Strategy. The document only mentions open-source AI when urging AU Member States to leverage it to develop students’ skills and when proposing regional and international collaboration on open science platforms to promote research and innovation. This is unexpected, especially since one of the AU AI Strategy’s primary concerns is that Africa lags behind in AI development due to low investment in digital technologies and a lack of skilled technical experts on the continent. Open-source AI could mitigate these issues to some degree: open-sourcing allows pre-trained models to be customised through techniques like fine-tuning, which lowers the barrier to entry for many actors because it is considerably more cost-effective than training a model from scratch (see the sketch below).
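
To illustrate why fine-tuning is so much cheaper than training from scratch, here is a minimal sketch of parameter-efficient fine-tuning using low-rank adapters (LoRA). This is my own illustration, not something the AU AI Strategy or the ILINA paper proposes; it assumes the Hugging Face transformers and peft libraries and access to an open-weights model, and the model name is only a placeholder.

```python
# Minimal LoRA fine-tuning setup: only small adapter matrices are trained,
# not the billions of base weights, which is what makes fine-tuning cheap.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # placeholder open-weights model
tokenizer = AutoTokenizer.from_pretrained(base)  # used to prepare training data (omitted here)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters are attached to the attention projections; the base
# weights stay frozen, so the trainable fraction is typically under 1%.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # e.g. a few million of ~8 billion params
```

Training only that small fraction of parameters can run on a single commodity GPU, which is exactly the low barrier to entry, for good and for ill, that the paragraph above describes.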

That said, the AU AI Strategy fails to discuss how open-source AI could expose African societies to certain marginal risks. It also overlooks how actors on the continent might contribute to these risks (given the democratisation of the technology) and what African countries should do in response to this possibility. This ILINA paper suggests that some African countries, especially those ranked highly in the Global AI Index, will probably invest in resources that enable them to participate substantially in fine-tuning open-source AI models in the future. This could be problematic, as it might result in the development of small but highly capable models that are misaligned. It could also lead to a rise in the malicious use of open-source models.

In light of these concerns, the AU AI Strategy could have addressed how African countries can regulate open-source AI and initiated discussions on potential liability frameworks for downstream harm caused by the modification of open-source models. This paper recommends that AI developers planning to open-source their models critically engage with potential stakeholders to discuss the broad impacts their models may have. Seeing that African countries are currently positioned mainly as “receivers” of open-source AI rather than developers, the Strategy could have discussed how African countries, as parties that may be adversely affected by open-source AI, can give their input to developers and have a bearing on what developers decide. The AU AI Strategy could also have discussed how the continent can more meaningfully leverage open-source AI for safety research: open sourcing can promote transparency and safety in development by allowing external study and auditing of models, as well as verification of developers’ internal safety checks.

V. Verdict and Way Forward

Conclusion

My overall impression of the AU AI Strategy is that, while it embodies the spirit of “minimizing the risks associated with AI” and “developing responsible, safe, and secure AI,” it ultimately fails to address critical AI safety issues with the seriousness they deserve. The AU AI Strategy mentions existential risk once, recommending that the AU Commission coordinate Africa’s participation in global debates around AI governance and existential risk. Although this appears to be a departure from the position in the AUDA-NEPAD White Paper, the AU AI Strategy still frames this as a conversation happening “out there” to which the continent might as well contribute.

Worryingly, the AU AI Strategy does not adequately address these risks, even though the regions leading AI development have begun to do so. Historically, African countries have disproportionately suffered the consequences of decisions made by other, more powerful states. Given the stakes, the AU AI Strategy should have focused more on how AU Member States can position themselves to mitigate these risks. The document correctly acknowledges that African countries need to be well represented in global discussions to preserve their interests and share their perspectives. But it fails to recognise that if the continent does not want to keep playing catch-up, it needs to be ready for all possible eventualities AI may bring.

Recommendations
AI safety funding

A good indicator of how seriously a strategy takes an issue is the resources allocated to it. The AU AI Strategy states that the AU Commission will develop a five-year implementation plan for the Strategy, and this follow-up plan should include a budget. The AU AI Strategy already points out that Africa’s investment in AI research is insignificant. In my opinion, the AU Commission should steer away from funding schemes like the one in the AUDA-NEPAD Roadmap, which suggests that the AU adopt the AU AI Grant (US$100 million) and AU AI Investment (US$200 million) Funding Mechanisms. These funds are supposed to support and scale up AI startups focused on developing AI for Africa. My concern is how the AUDA-NEPAD Roadmap recommends these funds be distributed: seventy per cent (70%) of the AU AI Grant would be directed to startups, and only thirty per cent (30%) to research.

According to the Emerging Technology Observatory, despite AI safety-related publications increasing by three hundred and fifteen per cent (315%) between 2017 and 2022, AI safety research makes up only approximately two per cent (2%) of AI research. Meanwhile, some researchers have suggested an idea that I endorse: that considering the magnitude of harm AI could cause, major developers and public funders should allocate at least one-third of their research and development budgets to AI safety. The AUDA-NEPAD Roadmap does not state how much of the 30% research tranche would be allocated specifically to AI safety research, but even from the overall percentages, development efforts are clearly the priority. By contrast, the implementation plan should allocate a substantial amount of funding to AI safety (illustrated below).
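
To make the gap concrete, here is my own back-of-the-envelope illustration using the Roadmap’s AU AI Grant figures. The 2% figure is the global safety share cited above, applied here purely as an assumption, since the Roadmap itself does not specify a safety allocation:

```latex
\begin{align*}
\text{Research tranche of the AU AI Grant:} &\quad 0.30 \times \text{US\$100M} = \text{US\$30M} \\
\text{Safety share at the global $\sim$2\% rate:} &\quad 0.02 \times \text{US\$30M} \approx \text{US\$0.6M} \\
\text{One-third benchmark on the full grant:} &\quad \tfrac{1}{3} \times \text{US\$100M} \approx \text{US\$33M}
\end{align*}
```

On these assumptions, safety research would receive well under one per cent of the grant, roughly fifty times less than the benchmark some researchers recommend.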

Aside from this, the AUDA-NEPAD Roadmap proposes an AU AI challenge that would award startups and academic institutions between US$100,000 and US$2 million, with the aim of spurring innovation and finding solutions to hard problems in Africa. I suggest that such projects be executed with caution. The AUDA-NEPAD Roadmap states that the challenge is modelled on the Defense Advanced Research Projects Agency (DARPA) challenge under the US government, and it acknowledges that the DARPA challenge faced controversy over potential misuse of the technology being developed. Likewise, an AU challenge focused on scaling capabilities could cause harm. Perhaps the AU could instead consider setting up a challenge focused on solving technical AI safety problems. This would fall within the AU AI Strategy’s recommendation to incentivise citizen-led safety solutions like bounty contests.

An African AI Safety Institute?

In the past few years, countries such as the US and the UK have moved to establish AI safety institutes, which aim to evaluate and ensure the safety of frontier AI models. The same could happen on the continent, seeing that the AU AI Strategy calls for intra-African cooperation on AI issues. For instance, on safety and security issues, the AU AI Strategy recommends establishing an expert group to research the impact of AI on peace and security and to promote African participation in global governance discussions. It also proposes designating a centre of excellence on AI safety and security to analyse the risks AI poses to digital environments, political systems, democratic institutions and critical infrastructure in Africa. Over time, it may be wise for such efforts to morph into an African AI Safety Institute. National AI institutes on the continent could also find ways to synergise their efforts on safety research.

Sharpening any follow-up strategies

Phase I of the implementation plan requires the development of further strategic documents, and I would recommend that more specific follow-up strategies on AI be prepared. Here, the AU could adopt a risk-based approach in which it pronounces its position on the different levels of risk posed by AI systems of varying capability. This would introduce more nuance into the AU’s stance on AI safety issues and give the AU a chance to discuss crucial issues, such as existential risk, more seriously. In addition, these strategies should be prepared through wider consultations so that they reflect broader and more inclusive positions. The logistics of the AU AI Strategy’s good recommendations, for example the creation of algorithm and AI transparency registers, should also be followed up in subsequent strategies.

Author Bio

Grace Chege

Grace is a Research Fellow at the ILINA Program. She has an undergraduate degree in law (first class honours) and is interested in AI Governance, Intellectual Property and how emerging technologies affect the “global majority.”

Acknowledgements

Special thanks to Cecil Abungu for helping me design, edit and thoroughly think through this piece, as well as encouraging me to put my work out there. Finally, special thanks to Mitchelle Kang’ethe and Nyatichi Mandi for their thoughtful feedback.
