Women in AI: Allison Cohen on building responsible AI projects

To give AI-focused women academics and others their well-deserved (and overdue) time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

In the spotlight today: Allison Cohen, the senior applied AI projects manager at Mila, a Quebec-based community of more than 1,200 researchers specializing in AI and machine learning. She works with researchers, social scientists and external partners to deploy socially beneficial AI projects. Cohen’s portfolio of work includes a tool that detects misogyny, an app to identify online activity from suspected human trafficking victims, and an agricultural app to recommend sustainable farming practices in Rwanda.

Previously, Cohen was a co-lead on AI drug discovery at the Global Partnership on Artificial Intelligence, an organization that guides the responsible development and use of AI. She’s also served as an AI strategy consultant at Deloitte and a project consultant at the Center for International Digital Policy, an independent Canadian think tank.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

The realization that we could mathematically model everything from recognizing faces to negotiating trade deals changed the way I saw the world, which is what made AI so compelling to me. Ironically, now that I work in AI, I see that we can’t, and in many cases shouldn’t, be capturing these kinds of phenomena with algorithms.

I was exposed to the field while I was completing a master’s in global affairs at the University of Toronto. The program was designed to teach students to navigate the systems affecting the world order: everything from macroeconomics to international law to human psychology. As I learned more about AI, though, I recognized how important it would become to world politics, and how important it was to educate myself on the subject.

What allowed me to break into the field was an essay-writing competition. For the competition, I wrote a paper describing how psychedelic drugs would help humans stay competitive in a labor market riddled with AI, which qualified me to attend the St. Gallen Symposium in 2018 (it was a creative writing piece). My invitation, and subsequent participation in that event, gave me the confidence to continue pursuing my interest in the field.

What work are you most proud of in the AI field?

One of the projects I managed involved building a dataset containing instances of subtle and overt expressions of bias against women.

For this project, staffing and managing a multidisciplinary team of natural language processing experts, linguists and gender studies specialists throughout the entire project life cycle was crucial. It’s something that I’m quite proud of. I learned firsthand why this process is fundamental to building responsible applications, and also why it’s not done enough: it’s hard work! If you can support each of these stakeholders in communicating effectively across disciplines, you can facilitate work that blends decades-long traditions from the social sciences and cutting-edge developments in computer science.

I’m also proud that this project was well received by the community. One of our papers got a spotlight recognition in the socially responsible language modeling workshop at one of the leading AI conferences, NeurIPS. Also, this work inspired a similar interdisciplinary process that was managed by AI Sweden, which adapted the work to fit Swedish notions and expressions of misogyny.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

It’s unfortunate that in such a cutting-edge industry, we’re still seeing problematic gender dynamics. It’s not only adversely affecting women; all of us are losing. I’ve been quite inspired by a concept called “feminist standpoint theory” that I learned about in Sasha Costanza-Chock’s book, “Design Justice.”

The theory claims that marginalized communities, whose knowledge and experiences don’t benefit from the same privileges as others, have an awareness of the world that can bring about fair and inclusive change. Of course, not all marginalized communities are the same, and neither are the experiences of individuals within those communities.

That said, a variety of perspectives from those groups is critical in helping us navigate, challenge and dismantle all kinds of structural challenges and inequities. That’s why a failure to include women can keep the field of AI exclusionary for an even wider swath of the population, reinforcing power dynamics outside of the field as well.

In terms of how I’ve handled a male-dominated industry, I’ve found allies to be quite important. These allies are a product of strong and trusting relationships. For example, I’ve been very fortunate to have friends like Peter Kurzwelly, who’s shared his expertise in podcasting to support me in the creation of a female-led and -centered podcast called “The World We’re Building.” This podcast allows us to elevate the work of even more women and non-binary people in the field of AI.

What advice would you give to women looking to enter the AI field?

Find an open door. It doesn’t have to be paid, it doesn’t have to be a career and it doesn’t even have to be aligned with your background or experience. If you can find an opening, you can use it to hone your voice in the space and build from there. If you’re volunteering, give it your all; it’ll help you stand out and hopefully get paid for your work as soon as possible.

Of course, there’s privilege in being able to volunteer, which I also want to acknowledge.

When I lost my job during the pandemic and unemployment was at an all-time high in Canada, very few companies were looking to hire AI talent, and those that were hiring weren’t looking for global affairs students with eight months’ experience in consulting. While applying for jobs, I started volunteering with an AI ethics organization.

One of the projects I worked on while volunteering was about whether there should be copyright protection for art produced by AI. I reached out to a lawyer at a Canadian AI law firm to better understand the space. She connected me with someone at CIFAR, who connected me with Benjamin Prud’homme, the executive director of Mila’s AI for Humanity Team. It’s amazing to think that through a series of exchanges about AI art, I learned about a career opportunity that has since transformed my life.

What are some of the most pressing issues facing AI as it evolves?

I have three answers to this question that are somewhat interconnected. I think we need to figure out:

  1. How to reconcile the fact that AI is built to scale while ensuring that the tools we’re building are adapted to fit local knowledge, experience and needs.
  2. If we’re to build tools that are adapted to the local context, we’re going to need to incorporate anthropologists and sociologists into the AI design process. But there are a plethora of incentive structures and other obstacles preventing meaningful interdisciplinary collaboration. How can we overcome this?
  3. How can we affect the design process even more profoundly than simply incorporating multidisciplinary expertise? Specifically, how can we change the incentives so that we’re designing tools built for those who need them most urgently rather than those whose data or business is most profitable?

What are some issues AI users should be aware of?

Labor exploitation is one of the issues that I don’t think gets enough coverage. There are many AI models that learn from labeled data using supervised learning methods. When a model relies on labeled data, there are people who have to do that tagging (i.e., someone adds the label “cat” to an image of a cat). These people (annotators) are often the subjects of exploitative practices. For models that don’t require the data to be labeled during the training process (as is the case with some generative AI and other foundation models), datasets can still be built exploitatively in that the developers often don’t obtain consent from, or provide compensation or credit to, the data creators.

I’d recommend checking out the work of Krystal Kauffman, whom I was so glad to see featured in this TechCrunch series. She’s making headway in advocating for annotators’ labor rights, including a living wage, an end to “mass rejection” practices, and engagement practices that align with fundamental human rights (in response to developments like intrusive surveillance).

What’s the best way to responsibly build AI?

Folks often look to ethical AI principles in order to claim that their technology is responsible. Unfortunately, ethical reflection can only begin after a number of decisions have already been made, including but not limited to:

  1. What are you building?
  2. How are you building it?
  3. How will it be deployed?

If you wait until after these decisions have been made, you’ll have missed countless opportunities to build responsible technology.

In my experience, the best way to build responsible AI is to be cognizant, from the earliest stages of your process, of how your problem is defined and whose interests it satisfies; how the orientation supports or challenges pre-existing power dynamics; and which communities will be empowered or disempowered through the AI’s use.

If you want to create meaningful solutions, you have to navigate these systems of power thoughtfully.

How can investors better push for responsible AI?

Ask about the team’s values. If the values are defined, at least in part, by the local community and there’s a degree of accountability to that community, it’s more likely that the team will incorporate responsible practices.
