Women in AI: Emilia Gómez at the EU began her AI career in music

To give AI-focused women academics and others their well-deserved (and overdue) time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who've contributed to the AI revolution. We'll publish pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Emilia Gómez is a principal investigator at the European Commission's Joint Research Centre and scientific coordinator of AI Watch, the EC initiative to monitor the advancement, uptake and impact of AI in Europe. Her team contributes scientific and technical knowledge to EC AI policies, including the recently proposed AI Act.

Gómez's research is grounded in the computational music field, where she contributes to the understanding of how humans describe music and the methods by which it is modeled digitally. Starting from the music domain, Gómez investigates the impact of AI on human behavior, specifically its effects on jobs, decisions and children's cognitive and socioemotional development.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I began my research in AI, specifically in machine learning, as a developer of algorithms for the automated description of music audio signals in terms of melody, tonality, similarity, style or emotion, which are exploited in several applications, from music platforms to education. I began to research how to design novel machine learning approaches for different computational tasks in the music field, and the relevance of the data pipeline, including data set creation and annotation. What I liked at the time about machine learning was its modelling capabilities and the shift from knowledge-driven to data-driven algorithm design: for example, instead of designing descriptors based on our knowledge of acoustics and music, we were now using our know-how to design data sets, architectures, and training and evaluation procedures.
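
To make that contrast concrete, here is a minimal, hypothetical sketch (not code from Gómez's work) of the two design philosophies applied to a toy key-estimation task: a hand-designed template built from music-theory knowledge versus a classifier trained on an annotated data set. The template values, feature shapes and choice of scikit-learn classifier are assumptions for illustration only.

```python
# Illustrative sketch: knowledge-driven vs. data-driven design for key estimation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Knowledge-driven: score a 12-bin chroma vector against a hand-crafted
# major-key template derived from music theory (values are illustrative).
MAJOR_TEMPLATE = np.array([1.0, 0.1, 0.5, 0.1, 0.8, 0.6, 0.1, 0.9, 0.1, 0.5, 0.1, 0.4])

def knowledge_driven_score(chroma: np.ndarray, tonic: int) -> float:
    """Correlate observed chroma with the template rotated to a candidate tonic."""
    return float(np.corrcoef(chroma, np.roll(MAJOR_TEMPLATE, tonic))[0, 1])

# Data-driven: learn the mapping from chroma features to key labels directly
# from an annotated data set instead of encoding the theory by hand.
def train_key_classifier(chroma_features: np.ndarray, key_labels: np.ndarray) -> LogisticRegression:
    """chroma_features: shape (n_examples, 12); key_labels: shape (n_examples,)."""
    model = LogisticRegression(max_iter=1000)
    model.fit(chroma_features, key_labels)
    return model
```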

From my experience as a machine learning researcher, and from seeing my algorithms "in action" in several domains, from music platforms to symphonic music concerts, I noticed the huge impact those algorithms have on people (e.g., listeners, musicians) and directed my research toward AI evaluation rather than development, specifically toward studying the impact of AI on human behavior and how to evaluate systems in terms of aspects such as fairness, human oversight or transparency. That is my team's current research topic at the Joint Research Centre.

What work are you most proud of (in the AI field)?

On the academic and technical side, I'm proud of my contributions to music-specific machine learning architectures at the Music Technology Group in Barcelona, which have advanced the state of the art in the field, as reflected in my citation record. For instance, during my PhD I proposed a data-driven algorithm to extract tonality from audio signals (e.g., whether a musical piece is in C major or D minor), which has become a key reference in the field, and later I co-designed machine learning methods for the automated description of music signals in terms of melody (e.g., used to search for songs by humming) and tempo, and for the modeling of emotions in music. Most of those algorithms are currently integrated into Essentia, an open source library for audio and music analysis, description and synthesis, and have been exploited in many recommender systems.
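
For readers curious how such algorithms surface in practice, the sketch below shows one plausible way to call a key/tonality extractor through Essentia's Python bindings. It assumes Essentia is installed; the file name is a placeholder, and exact algorithm names and parameters may vary between versions.

```python
# Minimal sketch: estimating the key of an audio file with Essentia (assumed installed).
import essentia.standard as es

audio = es.MonoLoader(filename="song.mp3")()     # load the file and downmix to mono
key, scale, strength = es.KeyExtractor()(audio)  # e.g. ("C", "major", 0.87) -- illustrative values
print(f"Estimated key: {key} {scale} (strength {strength:.2f})")
```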

I'm particularly proud of Banda Sonora Vital (LifeSoundTrack), a project awarded the Red Cross Award for Humanitarian Technologies, where we developed a personalized music recommender adapted to senior Alzheimer's patients. There's also PHENICX, a large European Union (EU)-funded project I coordinated on using music and AI to create enriched symphonic music experiences.

I love the music computing community, and I was glad to become the first female president of the International Society for Music Information Retrieval, to which I've been contributing throughout my career, with a special interest in increasing diversity in the field.

Currently, in my role at the Commission, which I joined in 2018 as lead scientist, I provide scientific and technical support to AI policies developed in the EU, notably the AI Act. From this recent work, which is less visible in terms of publications, I'm proud of my humble technical contributions to the AI Act; I say "humble" because, as you might guess, there are many people involved here! For example, there is a lot of work I contributed to on the harmonization or translation between legal and technical terms (e.g., proposing definitions grounded in existing literature) and on assessing the practical implementation of legal requirements, such as transparency or technical documentation for high-risk AI systems, general-purpose AI models and generative AI.

I'm also quite proud of my team's work in supporting the EU AI Liability Directive, where we studied, among other things, the particular characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability or their self- and continuous-learning capabilities, and assessed the associated difficulties in proving causation.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

It's not only tech; I'm also navigating a male-dominated AI research and policy field! I don't have a strategy or a technique, as it's the only environment I know. I don't know what it would be like to work in a diverse or a female-dominated working environment. "Wouldn't it be nice?", as the Beach Boys' song goes. I honestly try to avoid frustration and enjoy myself in this challenging scenario, working in a world dominated by very assertive guys and enjoying collaboration with excellent women in the field.

What advice would you give to women seeking to enter the AI field?

I would tell them two things:

You're much needed: please enter our field, as there's an urgent need for diversity of visions, approaches and ideas. For instance, according to the divinAI project (a project I co-founded to monitor diversity in the AI field), only 23% of author names at the International Conference on Machine Learning and 29% at the International Joint Conference on AI in 2023 were female, regardless of their gender identity.

You aren't alone: there are many women, nonbinary colleagues and male allies in the field, although we may not be so visible or recognized. Look for them and get their mentoring and support! In this context, there are many affinity groups present in the research field. For instance, when I became president of the International Society for Music Information Retrieval, I was very active in the Women in Music Information Retrieval initiative, a pioneer in diversity efforts in music computing with a very successful mentoring program.

What are some of the most pressing issues facing AI as it evolves?

In my opinion, researchers should devote as much effort to AI evaluation as to AI development, as there is currently a lack of balance. The research community is so busy advancing the state of the art in terms of AI capabilities and performance, and so excited to see its algorithms used in the real world, that it forgets to do proper evaluations, impact assessments and external audits. The more intelligent AI systems are, the more intelligent their evaluations need to be. The AI evaluation field is understudied, and this is the reason behind many incidents that give AI a bad reputation, e.g., gender or racial biases present in data sets or algorithms.
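
As a deliberately simplified illustration of the kind of evaluation being called for, the hypothetical sketch below compares a classifier's error rate across demographic groups; real fairness audits involve many more metrics, data checks and procedural safeguards.

```python
# Toy illustration of one bias check: comparing error rates across groups.
# All arrays and group labels below are placeholders, not real data.
import numpy as np

def error_rate_by_group(y_true: np.ndarray, y_pred: np.ndarray, groups: np.ndarray) -> dict:
    """Return the misclassification rate for each demographic group."""
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups).tolist()
    }

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(error_rate_by_group(y_true, y_pred, groups))  # {'A': 0.25, 'B': 0.75}: a gap worth auditing
```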

What are some issues AI users should be aware of?

Citizens using AI-powered tools, like chatbots, should know that AI is not magic. Artificial intelligence is a product of human intelligence. They should learn about the working principles and limitations of AI algorithms so they can challenge them and use them in a responsible way. It's also important for citizens to be informed about the quality of AI products, and about how they're assessed or certified, so that they know which ones they can trust.

What is the best way to responsibly build AI?

In my opinion, the best way to develop AI products (with a good social and environmental impact and in a responsible way) is to spend the necessary resources on evaluation, assessment of social impact and mitigation of risks (for example, to fundamental rights) before placing an AI system on the market. This is for the benefit of businesses and of trust in products, but also of society.

Responsible AI or trustworthy AI is a way to build algorithms where aspects such as transparency, fairness, human oversight or social and environmental well-being need to be addressed from the very beginning of the AI design process. In this sense, the AI Act not only sets the bar for regulating artificial intelligence worldwide, but also reflects the European emphasis on trustworthiness and transparency, enabling innovation while protecting citizens' rights. This, I believe, will increase citizen trust in the product and the technology.
