2023 Fellows
Ahmed Ghoor
Ahmed’s project aims to engage in a meaningful collaboration with Muslim communities to address long-term global catastrophic risks. Through comprehensive research, the project intends to explore where Islamic intellectual thought intersects with, diverges from, and can contribute to current long-term risk mitigation strategies. Based on this research, Ahmed plans to create educational resources, map out advocacy strategies and develop a fellowship program to support Muslims who are passionate about long-term risk mitigation.
Diana Owuor
Diana will work on a research project that aims to investigate the influence that the BRICS collaboration could have on the international governance of Artificial General Intelligence (AGI) development. Existing proposals for governing AGI development seem to underappreciate the underlying motivations that shape the relations and goals of the BRICS group of nations. Diana’s project will try to uncover what these goals and motivations could mean for the kinds of international AGI governance interventions that are likely to be most workable.
Jacob Ayang
Jacob’s project will investigate poultry farmers’ attitudes towards cage-free farming, identifying the challenges, potential solutions and financial implications of cage-free transitions. Although the focus of his work will be Ghana, his findings will be useful for other African countries as well. The project will provide an enhanced understanding of the poultry industry in Ghana and offer insights into potential areas for initiatives to support the industry’s transition from caged to cage-free systems of egg production.
Marie-Victoire Iradukunda
Marie will work on a research project assessing the impact that highly capable AI will have on asylum decision-making in the long-term future. Given current trends, highly capable AI is very likely to have capabilities such as context-specific decision-making and continual learning. Such capabilities will entice policymakers to push for the use of AI systems in decisions regarding asylum applications. As asylum seeker and refugee flows increase in the future (as anticipated by the International Organisation for Migration (IOM), for example), many policymakers will aim to limit the number of asylum seekers and refugees in their countries. The project will show that, in order to meet this goal of limiting numbers, highly capable AI will almost certainly take into account factors that should not be considered. Marie will therefore argue that those who care about justice for asylum seekers have very good reason to worry about highly capable AI in the long term.
Mark Lenny Gitau
Mark’s research project will examine how best to design and optimize the decision-making of an international AI institution focused on setting technical standards regarding the training and deployment of frontier AI models. The project will follow three routes. First, Mark will delineate the spectrum of key decisions this institution is likely to make. Second, he will draw lessons from existing bodies to argue for the sorts of actors that should be involved in the institution’s crucial decisions. Finally, he will propose decision-adoption mechanisms that would enable the institution to consistently and positively influence the trajectory of AI development. Through the project, Mark aims to add a unique perspective to the ever-growing literature on international AI governance.
Mitchelle Kang’ethe
Muthoni Wanjiku
Muthoni’s research project will examine whether transformative AI could lead to advanced neurotechnology that leaves human autonomy in an undesirable state. The project will show how transformative AI could turbocharge brain-computer interfaces (BCIs), and how this may create an opening for the direct manipulation of brain mechanisms. If this turns out to be the case, her project will propose interventions focused on preserving human autonomy.
Sienka Dounia
Sienka will use his time in the fellowship to upskill in the field of technical AI alignment. He will be closely guided by Jake Mendel, an independent AI alignment researcher and intern at David Krueger’s lab at the University of Cambridge. As part of his upskilling, Sienka will participate in reading groups such as Artificial General Intelligence Safety Fundamentals 201, the Principles of Intelligent Behavior in Biological and Social Systems reading group, and PyTorch & Keras fundamentals. He will also take courses in linear algebra and information theory.
Stacy Gatumbo
Stacy Gatumbo will design and run a 16-hour outreach seminar for a select group of Kenyan undergraduate students. The seminar will focus on improving participants’ understanding of social impact, helping them think more clearly, and giving them insights into how to pursue impactful careers. Eventually, it is hoped that seminar participants will go on to seek other opportunities that put them in a good position to have high-impact careers.
Tim Sankara
Tim will use his time in the fellowship to upskill in the field of technical AI alignment. He will be closely guided by Jake Mendel, an independent AI alignment researcher and intern at David Krueger’s lab at the University of Cambridge. As part of his upskilling, Tim will participate in reading groups such as Artificial General Intelligence Safety Fundamentals 201, the Principles of Intelligent Behavior in Biological and Social Systems reading group, and PyTorch & Keras fundamentals. He will also take courses in linear algebra and information theory.