Validation technique could help scientists make more accurate forecasts

Should you grab your umbrella before you walk out the door? Checking the weather forecast beforehand will only be helpful if that forecast is accurate.

Spatial prediction problems, like weather forecasting or air pollution estimation, involve predicting the value of a variable in a new location based on known values at other locations. Scientists typically use tried-and-true validation methods to determine how much to trust these predictions.

But MIT researchers have shown that these popular validation methods can fail quite badly for spatial prediction tasks. This might lead someone to believe that a forecast is accurate or that a new prediction method is effective, when in reality that is not the case.

The researchers developed a technique to assess prediction-validation methods and used it to prove that two classical methods can be substantively wrong on spatial problems. They then determined why these methods can fail and created a new method designed to handle the types of data used for spatial predictions.

In experiments with real and simulated data, their new method provided more accurate validations than the two most common techniques. The researchers evaluated each method using realistic spatial problems, including predicting the wind speed at Chicago O’Hare Airport and forecasting the air temperature at five U.S. metro locations.

Their validation method could be applied to a range of problems, from helping climate scientists predict sea surface temperatures to aiding epidemiologists in estimating the effects of air pollution on certain diseases.

“Hopefully, this will lead to more reliable evaluations when people are coming up with new predictive methods and a better understanding of how well methods are performing,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society, and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Broderick is joined on the paper by lead author and MIT postdoc David R. Burt and EECS graduate student Yunyi Shen. The research will be presented at the International Conference on Artificial Intelligence and Statistics.

Evaluating validations

Broderick’s group has recently collaborated with oceanographers and atmospheric scientists to develop machine-learning prediction models that can be used for problems with a strong spatial component.

Through this work, they noticed that traditional validation methods can be inaccurate in spatial settings. These methods hold out a small amount of training data, called validation data, and use it to assess the accuracy of the predictor.
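To make the standard approach concrete, here is a minimal holdout-validation sketch in Python. It uses scikit-learn on synthetic data; the spatial field, sizes, and parameters are all invented for illustration.

```python
# Minimal sketch of standard holdout validation on synthetic spatial data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))              # coordinates, e.g. (longitude, latitude)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)   # a spatially varying quantity plus noise

# Hold out a random 20 percent of the data as validation data.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
val_error = mean_squared_error(y_val, model.predict(X_val))
print(f"Estimated predictive error on held-out data: {val_error:.3f}")
```

The random split is exactly where the trouble described below creeps in: it implicitly treats the held-out points as interchangeable with the points one will ultimately predict.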

To find the root of the problem, they conducted a thorough analysis and determined that traditional methods make assumptions that are inappropriate for spatial data. Evaluation methods rely on assumptions about how validation data and the data one wants to predict, called test data, are related.

Traditional methods assume that validation data and test data are independent and identically distributed, which means that the value of any data point does not depend on the other data points. But in a spatial application, this is often not the case.

For instance, a scientist may be using validation data from EPA air pollution sensors to test the accuracy of a method that predicts air pollution in conservation areas. However, the EPA sensors are not independent; they were sited based on the locations of other sensors.

In addition, perhaps the validation data are from EPA sensors near cities while the conservation sites are in rural areas. Because these data are from different locations, they likely have different statistical properties, so they are not identically distributed.
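A toy simulation, not one of the paper’s experiments, shows how badly a random holdout can mislead in this situation. The “urban” and “rural” regions, the pollution field, and all parameters below are fabricated for illustration.

```python
# Toy illustration: validation sensors cluster near the training sites, but the
# test sites lie in a different region, so a random holdout is over-optimistic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

def pollution(x):
    # A smooth, nonlinear spatial field standing in for true pollution levels.
    return np.sin(0.8 * x) + 0.3 * x

x_urban = rng.uniform(0, 3, 200)    # sensor locations clustered near a city
x_rural = rng.uniform(7, 10, 100)   # distant conservation sites we care about
y_urban = pollution(x_urban) + 0.05 * rng.normal(size=200)
y_rural = pollution(x_rural) + 0.05 * rng.normal(size=100)

# Train on most of the urban sensors; hold out the rest as validation data.
model = LinearRegression().fit(x_urban[:150, None], y_urban[:150])
val_mse = np.mean((model.predict(x_urban[150:, None]) - y_urban[150:]) ** 2)
test_mse = np.mean((model.predict(x_rural[:, None]) - y_rural) ** 2)

print(f"holdout (urban) error: {val_mse:.3f}")   # looks reassuringly small
print(f"actual rural error:    {test_mse:.3f}")  # far larger in this setup
```

Because the held-out sensors sit in the same region as the training sensors, the validation error says nothing about how the model extrapolates to the rural sites.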

“Our experiments showed that you get some really wrong answers in the spatial case when these assumptions made by the validation method break down,” Broderick says.

The researchers needed to come up with a new assumption.

Specifically spatial

Thinking specifically about a spatial context, where data are gathered from different locations, they designed a method that assumes validation data and test data vary smoothly in space.

For instance, air pollution levels are unlikely to change dramatically between two neighboring houses.

“This regularity assumption is appropriate for many spatial processes, and it allows us to create a way to evaluate spatial predictors in the spatial domain. To the best of our knowledge, no one has done a systematic theoretical evaluation of what went wrong to come up with a better approach,” says Broderick.

To use their evaluation technique, one inputs the predictor, the locations to be predicted, and the validation data; the technique then automatically does the rest. Ultimately, it estimates how accurate the predictor’s forecast will be for the locations in question. However, effectively assessing their validation technique proved to be a challenge.
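The sketch below shows what such an interface could look like and how the smoothness assumption can be put to work: the error estimate at a target location borrows residuals from nearby validation points. The function name, the Gaussian weighting, and the lengthscale parameter are all hypothetical simplifications, not the authors’ actual method.

```python
# Hypothetical sketch: estimate a predictor's error at target locations by
# kernel-weighting squared residuals at nearby validation locations, leaning
# on the assumption that errors vary smoothly in space.
import numpy as np

def estimate_error(predict, val_locs, val_vals, target_locs, lengthscale=1.0):
    """predict: callable mapping an (n, d) array of locations to n predictions.
    val_locs, val_vals: held-out validation locations (m, d) and observed values (m,).
    target_locs: (n, d) locations where we want to know how trustworthy predictions are.
    Returns one mean-squared-error estimate per target location."""
    residuals_sq = (predict(val_locs) - val_vals) ** 2
    # Distance from every target location to every validation location.
    dists = np.linalg.norm(target_locs[:, None, :] - val_locs[None, :, :], axis=-1)
    # Smoothness assumption: nearby validation residuals are the most informative.
    weights = np.exp(-0.5 * (dists / lengthscale) ** 2)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ residuals_sq
```

In this toy version, the lengthscale sets how far each validation residual’s influence reaches; how far smoothness can be trusted is exactly the kind of choice a principled method must justify.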

“We are not evaluating a method; instead, we are evaluating an evaluation. So, we had to step back, think carefully, and get creative about the appropriate experiments we could use,” Broderick explains.

First, they designed several tests using simulated data, which had unrealistic aspects but allowed them to carefully control key parameters. Then, they created more realistic, semi-simulated data by modifying real data. Finally, they used real data for several experiments.

Using three types of data from realistic problems, like predicting the price of a flat in England based on its location and forecasting wind speed, enabled them to conduct a comprehensive evaluation. In most experiments, their technique was more accurate than either traditional method they compared it to.

In the future, the researchers plan to apply these techniques to improve uncertainty quantification in spatial settings. They also want to find other areas where the regularity assumption could improve the performance of predictors, such as with time-series data.

This research is funded, in part, by the National Science Foundation and the Office of Naval Research.