Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.
MIT scientists have developed a method that generates satellite imagery of the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.
As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also compared them with AI-generated images that did not incorporate a physics-based flood model.
The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding isn’t physically possible.
The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can produce realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will have to be trained on many more satellite images to learn how flooding would look in those regions.
“The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”
To illustrate the potential of the new method, which they’ve dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.
The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.
Generative adversarial images
The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.
“Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”
For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between the real satellite imagery and the imagery synthesized by the first network.
Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations”: factually incorrect features that appear in an otherwise realistic image but should not be there.
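To make the adversarial setup concrete, here is a minimal training-loop sketch in PyTorch. The network shapes, names, and toy data are illustrative assumptions, not the authors’ implementation; a real system would use far deeper image-to-image architectures and real satellite tiles.

```python
# A minimal conditional-GAN sketch: the generator maps a pre-storm image to a
# synthetic post-storm image; the discriminator scores (pre, post) pairs.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, pre):
        # Conditioned on the pre-storm image, synthesize a post-storm image.
        return self.net(pre)

class Discriminator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, pre, post):
        # Score the (pre, post) pair: real pairs high, synthesized pairs low.
        return self.net(torch.cat([pre, post], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(pre, post_real):
    # Discriminator step: learn to tell real pairs from synthesized ones.
    fake = G(pre).detach()
    d_loss = bce(D(pre, post_real), torch.ones(pre.size(0), 1)) + \
             bce(D(pre, fake), torch.zeros(pre.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: learn to fool the discriminator.
    g_loss = bce(D(pre, G(pre)), torch.ones(pre.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

pre = torch.randn(4, 3, 64, 64)    # toy pre-storm tiles
post = torch.randn(4, 3, 64, 64)   # toy post-storm tiles
print(train_step(pre, post))
```

The feedback loop in the article corresponds to the two alternating optimizer steps: each network’s loss depends on the other network’s current behavior.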
“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, so that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”
Flood hallucinations
In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.
Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which feeds into a wind model that simulates the pattern and strength of winds over a local region. That output is combined with a flood or storm surge model that forecasts how the wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.
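The chain of models can be summarized schematically. In the sketch below, every function is a hypothetical stub standing in for a dedicated numerical solver; only the ordering of the stages follows the article.

```python
# Schematic of the flood-forecasting pipeline: track -> wind -> surge -> flooding.
# All stubs and numbers here are illustrative, not real physics.
import numpy as np

def hurricane_track_model(storm):
    # Stub: forecast the storm center's positions along a straight-line track.
    return [(storm["lat"] + 0.5 * t, storm["lon"] - 0.5 * t) for t in range(6)]

def wind_model(track, grid_shape):
    # Stub: simulate wind strength (m/s) over the local region.
    return np.random.rand(*grid_shape) * 40.0

def storm_surge_model(winds):
    # Stub: estimate how winds push nearby water onto land (meters of surge).
    return winds * 0.05

def hydraulic_model(surge, elevation):
    # Stub: water stands wherever the surge exceeds the local elevation.
    return np.maximum(surge - elevation, 0.0)   # flood depth in meters

storm = {"lat": 29.7, "lon": -95.4}             # near Houston, for illustration
elevation = np.random.rand(64, 64) * 2.0        # stub elevation grid (m)
track = hurricane_track_model(storm)
winds = wind_model(track, elevation.shape)
depths = hydraulic_model(storm_surge_model(winds), elevation)
print("flooded cells:", int((depths > 0).sum()))
```

The color-coded map policymakers see is essentially a rendering of the final `depths` grid.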
“The question is: Can visualizations of satellite imagery add another level to this, something that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.
The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual images taken by satellites as they passed over Houston before and after Hurricane Harvey. When they tasked the generator with producing new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).
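A simple physical-consistency check of the kind that exposes such hallucinations is to flag any pixel the GAN renders as flooded that sits above an elevation the storm’s water could plausibly reach. The threshold and toy arrays below are illustrative assumptions, not the study’s evaluation protocol.

```python
# Flag "flooded" pixels that lie above the maximum physically reachable
# flood elevation; a nonzero fraction indicates hallucinated flooding.
import numpy as np

def hallucinated_flood_fraction(flood_mask, elevation_m, max_flood_elev_m=10.0):
    impossible = flood_mask & (elevation_m > max_flood_elev_m)
    return impossible.sum() / max(flood_mask.sum(), 1)

# Toy 4x4 tile: one flooded pixel sits on high ground (elevation 15 m).
elevation = np.array([[1, 2, 3, 15],
                      [1, 1, 2, 12],
                      [0, 1, 1, 11],
                      [0, 0, 1,  9]], dtype=float)
gan_flood = np.array([[1, 1, 0, 1],
                      [1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [1, 0, 0, 0]], dtype=bool)
print(hallucinated_flood_fraction(gan_flood, elevation))  # 1 of 8 pixels -> 0.125
```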
To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.
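One common way to enforce this kind of pixel-by-pixel agreement, sketched below using names from the earlier GAN snippet, is to stack the flood model’s extent mask onto the pre-storm image as an extra conditioning channel, so the generator can only “paint” flood texture where the physics places water. This is an illustrative sketch under those assumptions, not the authors’ implementation.

```python
# Physics-conditioned generator: the hydraulic model's flood-extent mask is
# concatenated to the pre-storm image as a fourth input channel.
import torch
import torch.nn as nn

class PhysicsConditionedGenerator(nn.Module):
    def __init__(self, img_channels=3):
        super().__init__()
        # +1 input channel carries the physics-based flood-extent mask.
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, img_channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, pre_image, flood_mask):
        # flood_mask: 1 where the flood model forecasts water, else 0.
        return self.net(torch.cat([pre_image, flood_mask], dim=1))

G = PhysicsConditionedGenerator()
pre = torch.randn(1, 3, 64, 64)     # toy pre-storm satellite tile
mask = torch.zeros(1, 1, 64, 64)
mask[..., 32:, :] = 1.0             # flood model says: southern half floods
post = G(pre, mask)                 # synthetic post-storm image, same extent
print(post.shape)                   # torch.Size([1, 3, 64, 64])
```

Because the flood extent now enters as a hard input rather than something the network must infer, the generator’s job reduces to rendering realistic water texture within physically forecast boundaries.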
“We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”
The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.