To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She’s also a senior research fellow at Churchill College, a fellow of the Association for Computational Linguistics, and a fellow of the European Laboratory for Learning and Intelligent Systems.
Korhonen previously served as a fellow at the Alan Turing Institute, and she has a PhD in computer science and master’s degrees in both computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible and “human-centric” NLP that — in her own words — “draws on the understanding of human cognitive, social and creative intelligence.”
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I was always fascinated by the beauty and complexity of human intelligence, particularly in relation to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to focus on AI because it’s a field that allows me to combine all these interests.
What work are you most proud of in the AI field?
While the science of building intelligent machines is fascinating, and one can easily get lost in the world of language modeling, the ultimate reason we’re building AI is its practical potential. I’m most proud of the work where my fundamental research on natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or apps that can support education.
Much of my current research is driven by the mission to develop AI that can improve human lives. AI has enormous positive potential for social and global good. A big part of my job as an educator is to inspire the next generation of AI scientists and leaders to focus on realizing that potential.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I’m fortunate to be working in an area of AI where we do have a sizable female population and established support networks. I’ve found these immensely helpful in navigating career and personal challenges.
For me, the biggest problem is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever-larger AI models at any cost is a great example. This has a huge impact on the priorities of both academia and industry, and wide-ranging socioeconomic and environmental implications. Do we need larger models, and what are their global costs and benefits? I think we might’ve asked these questions a lot earlier in the game if we had better gender balance in the field.
What advice would you give to women seeking to enter the AI field?
AI desperately needs more women at all levels, but especially at the level of leadership. The current leadership culture isn’t necessarily attractive for women, but active involvement can change that culture — and ultimately the culture of AI. Women are infamously not always great at supporting one another. I would really like to see an attitude change in this respect: We need to actively network and help one another if we want to achieve better gender balance in this field.
What are some of the most pressing issues facing AI as it evolves?
AI has developed incredibly fast: It has evolved from an academic field into a global phenomenon in less than a single decade. During this time, most effort has gone toward scaling through massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it can best serve humanity. People have good reason to worry about the safety and trustworthiness of AI and its impact on jobs, democracy, the environment and other areas. We need to urgently put human needs and safety at the center of AI development.
What are some issues AI users should be aware of?
Current AI, even when it seems highly fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms we operate with. Even the best of today’s technology makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I would not trust it to educate my children or make important decisions for me. We humans should remain in charge.
What is the best way to responsibly build AI?
Developers of AI tend to think about ethics as an afterthought — after the technology has already been built. The best way to think about it is before any development begins. Questions such as, “Do I have a diverse enough team to develop a fair system?” or “Is my data really free to use and representative of all the user populations?” or “Are my techniques robust?” should really be asked at the outset.
Although we can address some of this problem through education, we can only enforce it through regulation. The recent development of national and global AI regulations is important and needs to continue to ensure that future technologies will be safer and more trustworthy.
How can investors better push for responsible AI?
AI regulations are emerging, and companies will ultimately have to comply. We can think of responsible AI as sustainable AI that is truly worth investing in.