AI pareidolia: Can machines spot faces in inanimate objects?


In 1994, Florida jewelry designer Diana Duyser discovered what she believed to be the Virgin Mary’s image in a grilled cheese sandwich, which she preserved and later auctioned for $28,000. But how much do we really understand about pareidolia, the phenomenon of seeing faces and patterns in objects when they aren’t really there?

A new study from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) delves into this phenomenon, introducing an extensive, human-labeled dataset of 5,000 pareidolic images, far surpassing previous collections. Using this dataset, the team discovered several surprising results about the differences between human and machine perception, and how the ability to see faces in a slice of toast might have saved your distant relatives’ lives.

“Face pareidolia has long fascinated psychologists, but it’s been largely unexplored in the computer vision community,” says Mark Hamilton, MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead researcher on the work. “We wanted to create a resource that could help us understand how both humans and AI systems process these illusory faces.”

So what did all of these fake faces reveal? For one, AI models don’t seem to recognize pareidolic faces the way we do. Surprisingly, the team found that it wasn’t until they trained algorithms to recognize animal faces that they became significantly better at detecting pareidolic faces. This unexpected connection hints at a possible evolutionary link between our ability to spot animal faces — crucial for survival — and our tendency to see faces in inanimate objects. “A result like this seems to suggest that pareidolia might not arise from human social behavior, but from something deeper: like quickly spotting a lurking tiger, or identifying which way a deer is looking so our primordial ancestors could hunt,” says Hamilton.

Another intriguing discovery is what the researchers call the “Goldilocks Zone of Pareidolia,” a class of images where pareidolia is most likely to occur. “There’s a specific range of visual complexity where both humans and machines are most likely to perceive faces in non-face objects,” says William T. Freeman, MIT professor of electrical engineering and computer science and principal investigator of the project. “Too simple, and there’s not enough detail to form a face. Too complex, and it becomes visual noise.”

To uncover this, the team developed an equation that models how people and algorithms detect illusory faces. When analyzing this equation, they found a clear “pareidolic peak” where the likelihood of seeing faces is highest, corresponding to images that have “just the right amount” of complexity. This predicted “Goldilocks zone” was then validated in tests with both real human subjects and AI face detection systems.
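The researchers’ actual equation isn’t reproduced in this article, but the qualitative shape it predicts is easy to illustrate. The toy Python sketch below is an assumption-laden stand-in, not the authors’ formula: it simply places a bell-shaped probability over a normalized complexity score so that the likelihood of perceiving a face is low for very simple and very complex images and peaks in between.

```python
# Illustrative toy model only; not the paper's actual equation.
# It encodes the qualitative "Goldilocks zone" claim: perceived-face
# probability peaks at intermediate visual complexity.
import numpy as np

def p_pareidolia(complexity, mu=0.5, sigma=0.15):
    """Toy bell-shaped curve over a normalized complexity score in [0, 1]."""
    return np.exp(-((complexity - mu) ** 2) / (2 * sigma**2))

c = np.linspace(0.0, 1.0, 101)
peak = c[np.argmax(p_pareidolia(c))]
print(f"pareidolic peak at complexity ~ {peak:.2f}")  # ~0.50 by construction
```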

Image: Three photos of clouds above three photos of a fruit tart. The left photo of each is “too simple” to perceive a face; the middle photo is “just right”; and the right photo is “too complex.”

This new dataset, “Faces in Things,” dwarfs those of previous studies, which typically used only 20 to 30 stimuli. This scale allowed the researchers to explore how state-of-the-art face detection algorithms behaved after fine-tuning on pareidolic faces, showing not only that these algorithms could be edited to detect these faces, but that they could also act as a silicon stand-in for our own brain, allowing the team to ask and answer questions about the origins of pareidolic face detection that are impossible to ask in humans.
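The article doesn’t specify which detection architecture the team fine-tuned. As a rough illustration of what “fine-tuning on pareidolic faces” can look like in practice, the sketch below adapts an off-the-shelf torchvision detector (Faster R-CNN, chosen here purely as an assumption) to a single “face” class; the image and bounding box are dummy stand-ins for real dataset samples.

```python
# Hypothetical sketch of fine-tuning a pretrained detector on pareidolic
# face boxes. Model choice, paths, and labels are assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a COCO-pretrained detector, then swap in a two-class head:
# background vs. "face".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)
model.train()

# One illustrative training step on a dummy example: a 3-channel image
# plus a bounding box around the perceived (illusory) face.
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[120.0, 80.0, 300.0, 260.0]]),  # [x1, y1, x2, y2]
    "labels": torch.tensor([1]),                            # 1 = "face"
}]
losses = model(images, targets)  # dict of detection losses in train mode
total = sum(losses.values())
optimizer.zero_grad()
total.backward()
optimizer.step()
```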

To build this dataset, the team curated roughly 20,000 candidate images from the LAION-5B dataset, which were then meticulously labeled and judged by human annotators. This process involved drawing bounding boxes around perceived faces and answering detailed questions about each face, such as the perceived emotion, age, and whether the face was accidental or intentional. “Gathering and annotating thousands of images was a monumental task,” says Hamilton. “Much of the dataset owes its existence to my mom,” a retired banker, “who spent countless hours lovingly labeling images for our analysis.”
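The exact schema of “Faces in Things” isn’t given in the article, but an annotation record following the description above might look roughly like this; every field name here is hypothetical.

```python
# Hypothetical annotation record mirroring the fields described above
# (bounding box, perceived emotion, age, accidental vs. intentional).
# The real "Faces in Things" schema may differ.
annotation = {
    "image_id": "laion5b_000123",        # source image identifier (made up)
    "faces": [
        {
            "bbox_xyxy": [120, 80, 300, 260],  # perceived face location
            "emotion": "happy",                # annotator's judgment
            "apparent_age": "adult",
            "intentional": False,              # accidental face, not designed
        }
    ],
}
```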

Video: “Can AI Spot Faces in Objects?” (MIT CSAIL)

The study also has potential applications in improving face detection systems by reducing false positives, which could have implications for fields like self-driving cars, human-computer interaction, and robotics. The dataset and models could also help areas like product design, where understanding and controlling pareidolia could create better products. “Imagine being able to automatically tweak the design of a car or a child’s toy so it looks friendlier, or ensuring a medical device doesn’t inadvertently appear threatening,” says Hamilton.

“It’s fascinating how humans instinctively interpret inanimate objects with human-like traits. For instance, when you glance at an electrical socket, you might immediately envision it singing, and you can even imagine how it would ‘move its lips.’ Algorithms, however, don’t naturally recognize these cartoonish faces in the same way we do,” says Hamilton. “This raises intriguing questions: What accounts for this difference between human perception and algorithmic interpretation? Is pareidolia beneficial or detrimental? Why don’t algorithms experience this effect as we do? These questions sparked our investigation, as this classic psychological phenomenon in humans had not been thoroughly explored in algorithms.”

As the researchers prepare to share their dataset with the scientific community, they’re already looking ahead. Future work may involve training vision-language models to understand and describe pareidolic faces, potentially leading to AI systems that can engage with visual stimuli in more human-like ways.

“This is a delightful paper! It’s fun to read and it makes me think. Hamilton et al. propose a tantalizing question: Why do we see faces in things?” says Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering at Caltech, who was not involved in the work. “As they point out, learning from examples, including animal faces, goes only halfway to explaining the phenomenon. I bet that thinking about this question will teach us something important about how our visual system generalizes beyond the training it receives through life.”

Hamilton and Freeman’s co-authors include Simon Stent, staff research scientist at the Toyota Research Institute; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences, NVIDIA research scientist, and former CSAIL member; and CSAIL affiliates: postdoc Vasha DuTell, Anne Harrington MEng ’23, and research scientist Jennifer Corbett. Their work was supported, in part, by the National Science Foundation and the CSAIL MEnTorEd Opportunities in Research (METEOR) Fellowship, while being sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator. The MIT SuperCloud and Lincoln Laboratory Supercomputing Center provided high-performance computing resources for the researchers’ results.

This work is being presented this week at the European Conference on Computer Vision.
