To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Chinasa T. Okolo is a fellow at the Brookings Institution in the Center for Technology Innovation’s Governance Studies program. Before that, she served on the ethics and social impact committee that helped develop Nigeria’s National Artificial Intelligence Strategy and has served as an AI policy and ethics advisor for various organizations, including the African Union Development Agency and the Quebec Artificial Intelligence Institute. She recently received a Ph.D. in computer science from Cornell University, where she researched how AI impacts the Global South.
Briefly, how did you get your start in AI? What attracted you to the sphere?
I initially transitioned into AI because I saw how computational techniques could advance biomedical research and democratize access to healthcare for marginalized communities. During my final year of undergrad [at Pomona College], I began doing research with a human-computer interaction professor, which exposed me to the challenges of bias within AI. During my Ph.D., I became interested in understanding how these issues would impact people in the Global South, who represent a majority of the world’s population and are often excluded from and underrepresented in AI development.
What work are you most proud of (in the AI field)?
I’m incredibly proud of my work with the African Union (AU) on developing the AU-AI Continental Strategy for Africa, which aims to help AU member states prepare for the responsible adoption, development, and governance of AI. The drafting of the strategy took over 1.5 years, and it was released in late February 2024. It’s now in an open feedback period with the goal of being formally adopted by AU member states in early 2025.
As a first-generation Nigerian-American who grew up in Kansas City, MO, and didn’t leave the States until studying abroad during undergrad, I have always aimed to center my career within Africa. Engaging in such impactful work so early in my career makes me excited to pursue similar opportunities to help shape inclusive, global AI governance.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Finding community with those who share my values has been essential in navigating the male-dominated tech and AI industries.
I’ve been fortunate to see many advances in responsible AI and prominent research exposing the harms of AI being led by Black women scholars like Timnit Gebru, Safiya Noble, Abeba Birhane, Ruha Benjamin, Joy Buolamwini, and Deb Raji, many of whom I’ve been able to connect with over the past few years.
Seeing their leadership has motivated me to continue my work in this field and shown me the value of going “against the grain” to make a meaningful impact.
What advice would you give to women seeking to enter the AI field?
Don’t be intimidated by a lack of a technical background. The field of AI is multidimensional and needs expertise from various domains. My research has been influenced heavily by sociologists, anthropologists, cognitive scientists, philosophers, and others throughout the humanities and social sciences.
What are some of the most pressing issues facing AI as it evolves?
One of the most prominent issues will be improving the equitable representation of non-Western cultures in prominent language and multimodal models. The overwhelming majority of AI models are trained in English and on data that primarily represents Western contexts, which leaves out valuable perspectives from the majority of the world.
Moreover, the race toward building larger models will lead to a greater depletion of natural resources and greater climate change impacts, which already disproportionately affect Global South countries.
What are some issues AI users should be aware of?
A significant number of AI tools and systems that have been put into public deployment overstate their capabilities and simply don’t work. Many tasks people aim to use AI for could likely be solved through simpler algorithms or basic automation.
Moreover, generative AI has the capacity to exacerbate the harms observed from earlier AI tools. For years, we’ve seen how these tools exhibit bias and lead to harmful decision-making against vulnerable communities, which will likely increase as generative AI grows in scale and reach.
However, equipping people with the knowledge to understand the limitations of AI can help improve the responsible adoption and use of these tools. Improving AI and data literacy among the general public will become fundamental as AI tools are rapidly integrated into society.
What is the best way to responsibly build AI?
The best way to responsibly build AI is to be critical of the intended and unintended use cases for these tools. People building AI systems have a responsibility to object to AI being used in harmful scenarios in warfare and policing, and should seek external guidance on whether AI is appropriate for other use cases they may be targeting. Given that AI is often an amplifier of existing social inequalities, it is also imperative that developers and researchers be cautious in how they build and curate the datasets used to train AI models.
How can investors better push for responsible AI?
Many argue that rising VC interest in “cashing out” on the current AI wave has accelerated the rise of “AI snake oil,” a term coined by Arvind Narayanan and Sayash Kapoor. I agree with this sentiment and believe that investors must take leadership positions, alongside academics, civil society stakeholders, and industry members, to advocate for responsible AI development. As an angel investor myself, I have seen many dubious AI tools on the market. Investors should also invest in AI expertise to vet companies and request external audits of tools demoed in pitch decks.
Anything else you’d like to add?
This ongoing “AI summer” has led to a proliferation of “AI experts” who often detract from important conversations about the present-day risks and harms of AI and present misleading information about the capabilities of AI-enabled tools. I encourage those interested in educating themselves about AI to be critical of these voices and to seek reputable sources to learn from.