2024 Fellows

Amelia Midwa

Amelia’s project will investigate whether US tort law can be interpreted to create post-deployment monitoring duties for developers of general-purpose and highly capable open-source AI models. She will focus in particular on the US tort law doctrines of negligence and product liability. General-purpose and highly capable open-source AI models present significant risks, such as misinformation and cyberattacks, because they can be fine-tuned for harmful purposes after public release. Post-deployment monitoring duties could be one way to make developers take more responsibility for releasing such models. The project will begin with a desk-based review of the literature on these doctrines, and will then discuss the form such duties could take and how they could apply to developers.

Grace Chege

Grace will develop a framework to guide AI companies on when to release their frontier AI model weights on an open-access basis. Her work will draw on proportionality, a well-established legal test for balancing competing interests. Open-access AI could make powerful AI systems more accessible to malicious actors, but it also offers benefits such as democratizing AI access and development. Grace will explore how proportionality is applied in armed conflict and human rights cases and extract key principles from that case law. Using those principles, she will construct guidelines on when it should be considered responsible to open-source a model. The guidelines will detail what kind of evidence indicates that open-sourcing is safe and what companies should do before open-sourcing.

Jean Cheptumo

Jean’s research project will investigate and propose the most effective interventions for mitigating the projected rise of intensive animal agriculture in Africa. Her research will draw insights from comparable case studies, such as the projected industrialisation of farming systems in Bhutan. By qualitatively analysing what produced the outcomes in each case, Jean will identify incentives and strategic decisions that could lead to the adoption of policies curbing the rise of intensive animal agriculture in Africa.

Jesse Thaiya

Jesse’s research will explore the possibility of using limited liability as an incentive for developers of frontier AI systems to report harms and threats that may arise during the development of their advanced AI systems. It will investigate other areas where limited liability has proven helpful in incentivizing actors, and then consider how to design a limited liability framework that balances holding parties responsible for the harms of their systems with allowing crucial information about those harms to be reported and evaluated.

Sharon Malonza

Sharon will work on a research project that aims to develop criteria for categorizing biological design tools of concern. Biological design tools (BDTs) are generally described as AI-enabled tools that are trained on biological data and used to design, simulate, and analyse biological systems and components such as proteins, viral vectors, and other biological agents. BDTs pose a unique AI-driven biorisk by giving skilled malicious actors novel capabilities to create pathogens with pandemic potential and targeted biological weapons. The existing literature has yet to clearly define which kinds of BDTs are of concern, i.e. which are particularly deserving of close scrutiny or regulatory action. Borrowing from the EU AI Act’s risk stratification framework, Sharon’s project will propose a definition intended to unlock productive conversations about the regulation of BDTs.

Tewodros Mesfin

Tewodros will use the period of his fellowship to upskill in technical AI safety. Under the guidance of ILINA Research Affiliate Sienka Dounia, he will study linear algebra, calculus, statistics, information theory, programming (with a focus on PyTorch), and machine learning. His focus will be on deep learning, interpretability, and hands-on technical AI safety projects from the Redwood Research MLAB and ARENA curricula, as well as recent promising AI safety agendas. Additionally, Tewodros will replicate a number of research papers and complete relevant mathematics courses.

Tomilayo Olatunde

Tomilayo’s project will examine the unique reporting system of the Financial Action Task Force (FATF) to draw insights that can inform the design of an international system for reporting large-scale AI training runs. There is evidence that FATF’s reporting system has had a strongly positive influence on the prevention of money laundering across various countries. Likewise, there is evidence that large-scale AI training runs could produce dangerous AI systems, so such runs warrant close monitoring. Tomilayo will analyse FATF’s mutual evaluation and assessment procedures to identify what has been effective for FATF, and will then use her findings to make recommendations for an effective international reporting framework for large-scale AI training runs.