Bridging philosophy and AI to explore computing ethics

During a meeting of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses the same impossible question to his students that he often asks himself in the research he leads with the Computer Assisted Programming Group at MIT:

“How do we make sure that a machine does what we want, and only what we want?”

At this moment, in what some consider the golden age of generative AI, this may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that this struggle is as old as humankind itself.

He begins to retell the Greek myth of King Midas, the monarch who was granted the godlike power to transform anything he touched into solid gold. Predictably, the wish backfired when Midas accidentally turned everyone he loved into gilded stone.

“Be careful what you ask for, because it might be granted in ways you don’t expect,” he says, cautioning his students, many of them aspiring mathematicians and programmers.

Digging into MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming. We hear about everything from the 1970s Pygmalion machine that required incredibly detailed cues to the late-’90s computer software that took teams of engineers years and an 800-page document to program.

While remarkable in their time, these processes took too long to reach users. They left no room for spontaneous discovery, play, and innovation.

Solar-Lezama talks about the risks of building modern machines that don’t always respect a programmer’s cues or red lines, and that are equally capable of exacting harm as saving lives.

Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles, weighing who is morally responsible when one hypothetically hits and kills a pedestrian. His argument questions the assumptions underlying technical advances and considers multiple valid viewpoints, leaning on the philosophical theory of utilitarianism. Roesler explains, “Roughly, according to utilitarianism, the moral thing to do brings about the most good for the greatest number of people.”

MIT philosopher Brad Skow, with whom Solar-Lezama developed and is team-teaching the course, leans forward and takes notes.

A class that demands technical and philosophical expertise

Ethics of Computing, offered for the first time in fall 2024, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline’s lens for examining the broader implications of today’s ethical issues, while Solar-Lezama, who is also the associate director and chief operating officer of MIT’s Computer Science and Artificial Intelligence Laboratory, offers perspective through his.

Skow and Solar-Lezama attend each other’s lectures and adjust their follow-up class sessions in response. Introducing the element of learning from each other in real time has made for more dynamic and responsive class conversations. A recitation to break down the week’s topic with graduate students from philosophy or computer science and a lively discussion round out the course content.

“An outsider might think that this is going to be a class that makes sure that these new computer programmers being sent into the world by MIT always do the right thing,” Skow says. However, the class is intentionally designed to teach students a different skill set.

Determined to create an impactful semester-long course that did more than lecture students about right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as an associate dean of the Social and Ethical Responsibilities of Computing. Hare recruited Skow and Solar-Lezama as the lead instructors, knowing they could do something more profound than that.

“Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There aren’t other classes at MIT that place both side by side,” Skow says.

That’s exactly what drew senior Alek Westover to enroll. The math and computer science double major explains, “A lot of people are talking about how the trajectory of AI will look in five years. I thought it was important to take a class that will help me think more about that.”

Westover says he’s drawn to philosophy because of an interest in ethics and a desire to distinguish right from wrong. In math classes, he’s learned to write down a problem statement and receive instant clarity on whether he’s successfully solved it. However, in Ethics of Computing, he has learned how to make written arguments for “tricky philosophical questions” that may not have a single correct answer.

For example, “One problem we could be worried about is, what happens if we build powerful AI agents that can do any job a human can do?” Westover asks. “If we’re interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?”

There’s no easy answer, and Westover assumes he’ll encounter many other dilemmas in the workplace in the future.

“So, is the internet destroying the world?”

The semester began with a deep dive into AI risk, or the notion of “whether AI poses an existential risk to humanity,” unpacking free will, the science of how our brains make decisions under uncertainty, and debates about the long-term liabilities and regulation of AI. A second, longer unit zeroed in on “the internet, the World Wide Web, and the social impact of technical decisions.” The end of the term looks at privacy, bias, and free speech.

One class topic was dedicated to provocatively asking: “So, is the internet destroying the world?”

Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these types of issues is precisely why the self-described “technology skeptic” enrolled in the course.

Growing up with a mom who is hearing impaired and a little sister with a developmental disability, Ogoe became the default family member whose role it was to call providers for tech support or program iPhones. She leveraged her skills into a part-time job fixing cell phones, which paved the way for a deep interest in computation and a path to MIT. However, a prestigious summer fellowship in her first year made her question the ethics behind how consumers were impacted by the technology she was helping to program.

“Everything I’ve done with technology is from the perspective of people, education, and personal connection,” Ogoe says. “It’s a niche that I love. Taking humanities classes around public policy, technology, and culture is one of my big passions, but this is the first course I’ve taken that also involves a philosophy professor.”

The following week, Skow lectures on the role of bias in AI, and Ogoe, who is entering the workforce next year but plans to eventually attend law school to focus on regulating related issues, raises her hand to ask questions or share counterpoints four times.

Skow digs into COMPAS, a controversial AI software that uses an algorithm to predict the likelihood that people accused of crimes will go on to re-offend. According to a 2018 ProPublica article, COMPAS was likely to flag Black defendants as future criminals, producing false positives at twice the rate it did for white defendants.

The class session is devoted to determining whether the article warrants the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories of fairness:

“Substantive fairness is the idea that a particular outcome might be fair or unfair,” he explains. “Procedural fairness is about whether the procedure by which an outcome is produced is fair.” A range of conflicting criteria of fairness are then introduced, and the class discusses which were plausible and what conclusions they warranted about the COMPAS system.
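The tension among such criteria can be made concrete in a few lines of code. The sketch below is an illustration, not course material or the ProPublica analysis: it uses made-up records for two hypothetical groups and computes two common statistical fairness measures for a risk-flagging tool, the false positive rate per group (the error-rate criterion ProPublica emphasized) and the precision of the high-risk flag per group (a form of predictive parity).

```python
# Illustrative sketch with hypothetical data (not the COMPAS dataset):
# a classifier can satisfy one fairness criterion while violating another.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high-risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["flagged_high_risk"]]
    return len(flagged) / len(negatives)

def precision(records, group):
    """Share of flagged defendants in `group` who actually reoffended."""
    flagged = [r for r in records if r["group"] == group and r["flagged_high_risk"]]
    correct = [r for r in flagged if r["reoffended"]]
    return len(correct) / len(flagged)

# Made-up records: group label, whether the tool flagged the person
# as high-risk, and whether they in fact reoffended.
records = (
      [{"group": "A", "flagged_high_risk": True,  "reoffended": True}]  * 40
    + [{"group": "A", "flagged_high_risk": True,  "reoffended": False}] * 20
    + [{"group": "A", "flagged_high_risk": False, "reoffended": False}] * 40
    + [{"group": "B", "flagged_high_risk": True,  "reoffended": True}]  * 20
    + [{"group": "B", "flagged_high_risk": True,  "reoffended": False}] * 10
    + [{"group": "B", "flagged_high_risk": False, "reoffended": False}] * 70
)

for g in ("A", "B"):
    print(g,
          "false positive rate:", round(false_positive_rate(records, g), 2),
          "precision:", round(precision(records, g), 2))
```

On this toy data, the flag is equally precise for both groups (about 0.67), yet group A’s false positive rate (0.33) is more than double group B’s (0.13). Which criterion one privileges determines whether the tool counts as fair, which is part of why the class debate has no tidy resolution.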

Afterward, the two professors go upstairs to Solar-Lezama’s office to debrief on how the exercise went that day.

“Who knows?” says Solar-Lezama. “Maybe five years from now, everybody will laugh at how people were worried about the existential risk of AI. But one of the themes I see running through this class is learning to approach these debates beyond media discourse and getting to the bottom of thinking rigorously about these issues.”