To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Kristine Gloria leads the Emergent and Intelligent Technologies Initiative at the Aspen Institute, the Washington, D.C.-headquartered think tank focused on values-based leadership and policy expertise. Gloria holds a PhD in cognitive science and a master’s in media studies, and her past work includes research at MIT’s Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab and the Center for Society, Technology and Policy at UC Berkeley.
Q&A
Briefly, how did you get your start in AI? What attracted you to the sector?
To be frank, I definitely didn’t start my career in pursuit of being in AI. First, I was really interested in understanding the intersection of technology and public policy. At the time, I was working on my master’s in media studies, exploring ideas around remix culture and intellectual property. I was living and working in D.C. as an Archer Fellow for the New America Foundation. One day, I distinctly remember sitting in a room full of public policymakers and politicians who were throwing around terms that didn’t quite fit their actual technical definitions. It was shortly after this meeting that I realized that to be able to move the needle on public policy, I needed the credentials.

I went back to school, earning my doctorate in cognitive science with a concentration on semantic technologies and online consumer privacy. I was very fortunate to have found a mentor, an advisor and a lab that encouraged a cross-disciplinary understanding of how technology is designed and built. So, I sharpened my technical skills alongside developing a more critical viewpoint on the many ways tech intersects our lives. In my role as the director of AI at the Aspen Institute, I then had the privilege to ideate, engage and collaborate with some of the leading thinkers in AI. And I always found myself gravitating toward those who took the time to deeply question if and how AI would impact our day-to-day lives.
Over the years, I’ve led various AI initiatives, and one of the most meaningful is just getting started. Now, as a founding team member and director of strategic partnerships and innovation at a new nonprofit, Young Futures, I’m excited to weave in this kind of thinking to achieve our mission of making the digital world a better place to grow up. Specifically, as generative AI becomes table stakes and as new technologies come online, it’s both urgent and critical that we help preteens, teens and their support units navigate this vast digital wilderness together.
What work are you most proud of (in the AI field)?
I’m most proud of two initiatives. First is my work related to surfacing the tensions, pitfalls and effects of AI on marginalized communities. Published in 2021, “Power and Progress in Algorithmic Bias” articulates months of stakeholder engagement and research around this issue. In the report, we posit one of my all-time favorite questions: “How can we (data and algorithmic operators) recast our own models to forecast for a different future, one that centers around the needs of the most vulnerable?” Safiya Noble is the original author of that question, and it’s a constant consideration throughout my work.

The second most important initiative came recently from my time as head of data at Blue Fever, a company on a mission to improve youth well-being in a judgment-free and inclusive online space. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in this process. Most saliently, I gained a profound new appreciation for the impact a virtual companion can have on someone who is struggling or who may not have the support systems in place. Blue was designed and built to bring its “big-sibling energy” to help guide users to reflect on their mental and emotional needs.
How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?
Unfortunately, the challenges are real and still very current. I’ve experienced my fair share of disbelief in my skills and experience among all types of colleagues in the space. But for every single one of those negative challenges, I can point to an example of a male colleague being my fiercest cheerleader. It’s a tough environment, and I hold on to these examples to help manage. I also think that so much has changed in this space even in the last five years. The necessary skill sets and professional experiences that qualify as part of “AI” are not strictly computer science-focused anymore.
What advice would you give to women seeking to enter the AI field?
Jump in and follow your curiosity. This space is in constant motion, and the most interesting (and likely most productive) pursuit is to continually be critically optimistic about the field itself.
What are some of the most pressing issues facing AI as it evolves?
I really think some of the most pressing issues facing AI are the same issues we’ve not quite gotten right since the internet was first introduced. These are issues around agency, autonomy, privacy, fairness, equity and so on. They are core to how we situate ourselves among the machines. Yes, AI can make things vastly more complicated, but so can socio-political shifts.
What are some issues AI users should be aware of?
AI users should be aware of how these systems complicate or enhance their own agency and autonomy. In addition, as the discourse around how technology, and particularly AI, may impact our well-being grows, it’s important to remember that there are tried-and-true tools to manage more negative outcomes.
What is the best way to responsibly build AI?
A responsible build of AI is more than just the code. A truly responsible build takes into account the design, governance, policies and business model. Each drives the others, and we’ll continue to fall short if we only strive to address one part of the build.
How can investors better push for responsible AI?
One specific task, which I love Mozilla Ventures for requiring in its diligence, is an AI model card. Developed by Timnit Gebru and others, this practice of creating model cards enables teams, like funders, to evaluate the risks and safety issues of the AI models used in a system.

Also, related to the above, investors should holistically evaluate the system in its capacity and ability to be built responsibly. For example, if you have trust and safety features in the build or a model card published, but your revenue model exploits vulnerable population data, then there is a misalignment with your intent as an investor. I do think you can build responsibly and still be profitable.

Lastly, I would love to see more collaborative funding opportunities among investors. In the realm of well-being and mental health, the solutions will be varied and vast, as no one is the same and no single solution can solve for all. Collective action among investors who are interested in solving the problem would be a welcome addition.