Dust in the Machine

Applying machine and deep learning to satellite imagery is leading to more accurate detection and tracking of large-scale dust events.
A massive brownish cloud of dust flows off the coast of Africa and spreads over the Atlantic Ocean in this Suomi National Polar-orbiting Partnership (Suomi NPP) corrected reflectance (true color) Visible Infrared Imaging Radiometer Suite (VIIRS) image acquired on 20 June 2020. This dust eventually settled over the Caribbean, northern South America, and the U.S. Southeast and Gulf Coast. Interactively explore this image using NASA Worldview. NASA Worldview image.

According to NOAA, dust is the single largest aerosol component in Earth's atmosphere. Depending on the composition and size of the dust particles, along with their height in the atmosphere, these aerosols can remain suspended for days or even weeks. Because dust is composed of minerals that absorb and reflect light, it can be detected by sensors aboard Earth observing satellites. Clouds, smoke, and darkness, however, can make dust difficult to distinguish from other elements in satellite imagery.

A joint effort by NASA’s Short-term Prediction Research and Transition (SPoRT) project and Interagency Implementation and Advanced Concepts Team (IMPACT) using machine learning (ML) and deep learning (DL) is improving dust detection and enabling more rapid delivery of information about ongoing dust storms to operational meteorologists. Because atmospheric dust can have significant positive and negative impacts, knowing where dust is headed and where it might settle is critical information.

As noted by the World Meteorological Organization, atmospheric dust absorbs and scatters solar radiation entering Earth’s atmosphere and absorbs long-wave radiation emitted from the surface. It also transports nutrients that can be redistributed as it circles Earth. On the negative side, dust storms can significantly impact aircraft operations and the transportation of goods and services on highways and roads. Additionally, dust is a vector for diseases like meningitis and valley fever, and can exacerbate conditions like asthma and bronchitis.

Comparison of Dust RGB GOES-16 imagery (top image) with visible imagery (lower image). Image from Fuell, K. 2017. “Dust Imagery GOES Beyond Visible,” NASA SPoRT, The Wide World of SPoRT website.

A major improvement in dust detection was the development of the Dust RGB (red-green-blue) imagery product by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) for the Meteosat Second Generation (MSG) Spinning Enhanced Visible and InfraRed Imager (SEVIRI). Dust RGB processing was applied to imagery acquired by the Advanced Baseline Imager (ABI) instrument aboard the NASA/NOAA Geostationary Operational Environmental Satellite (GOES)-R series of satellites, which includes the operational GOES-16 and GOES-17 spacecraft and the scheduled GOES-T (December 2021) and GOES-U (2024) satellites.

RGB composite imagery combines data from multiple spectral channels to highlight specific phenomena, including dust, land surface change, and even power outages. The Dust RGB product uses channel combinations specifically tailored to dust detection: dust appears pink/magenta during the day and in varying magenta/purple shades at night, depending on the height of the dust plume. Even with this enhanced ability to evaluate and track dust, the Dust RGB product has limitations. One significant limitation is that as Earth's surface cools at night, the surface color in the imagery fades to a shade of magenta very close to the magenta/purple of dust itself.
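As an illustration of how such a composite is assembled, the sketch below applies the widely published EUMETSAT SEVIRI Dust RGB recipe (infrared brightness-temperature differences scaled into red, green, and blue) to synthetic data; the exact bands and scaling ranges used operationally for GOES ABI may differ.

```python
import numpy as np

def scale(band, lo, hi, gamma=1.0):
    """Linearly rescale brightness temperatures (K) to [0, 1], then apply gamma."""
    x = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return x ** (1.0 / gamma)

def dust_rgb(bt_8p7, bt_10p8, bt_12p0):
    """Compose a Dust RGB image from three IR brightness-temperature arrays.

    Ranges follow the published EUMETSAT SEVIRI recipe:
      Red   = BT(12.0) - BT(10.8),  -4 .. +2 K
      Green = BT(10.8) - BT(8.7),    0 .. 15 K, gamma 2.5
      Blue  = BT(10.8),            261 .. 289 K
    """
    r = scale(bt_12p0 - bt_10p8, -4.0, 2.0)
    g = scale(bt_10p8 - bt_8p7, 0.0, 15.0, gamma=2.5)
    b = scale(bt_10p8, 261.0, 289.0)
    return np.stack([r, g, b], axis=-1)

# Tiny synthetic scene: one "dusty" pixel and one cooler "clear" pixel.
bt_8p7 = np.array([[285.0, 280.0]])
bt_10p8 = np.array([[290.0, 285.0]])
bt_12p0 = np.array([[291.0, 283.0]])
rgb = dust_rgb(bt_8p7, bt_10p8, bt_12p0)
print(rgb.shape)  # (1, 2, 3)
```

Dust tends to raise the red component (positive 12.0 minus 10.8 difference) while suppressing green, which is what produces the characteristic pink/magenta coloring.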

The work by SPoRT and IMPACT applying ML and DL to GOES Dust RGB imagery is helping alleviate this issue through a model that predicts where dust appears in an image. ML is a subfield of artificial intelligence that uses statistics and mathematical models to detect patterns in data. DL is a subfield of ML that uses interconnected algorithms called neural networks, which attempt to mimic the way the human brain makes decisions. Given large amounts of training data, a machine can be taught to recognize patterns in data and then identify similar patterns in real-time data on its own.

“What we wanted to do is use remote sensing principles to develop a physically-based machine learning approach to identifying dust in satellite imagery, especially at night,” explains Dr. Emily Berndt, SPoRT’s remote sensing lead and the principal investigator (PI) of the dust ML effort.

SPoRT develops ways for the operational weather community to use satellite observations and research capabilities to improve short-term regional and local weather forecasts and is funded through NASA’s Research and Analysis Program Weather and Atmospheric Dynamics Focus Area. Berndt notes that machine learning was an area in which SPoRT did not have much experience. Fortunately, the application of machine learning and similar technologies is a major focus of IMPACT work.

IMPACT is a component of NASA’s Earth Science Data Systems (ESDS) Program that prototypes the latest technologies to support new science and applications from Earth observation data. The application of machine learning and deep learning to Dust RGB imagery was a tailor-made SPoRT/IMPACT collaboration.

Left image: Nighttime Dust RGB image of Texas acquired on 14 April 2018 with a magenta area of dust across the center of the image between darker red areas of clouds. Right image: Random forest model probabilities of dust for the same time, with red indicating a high probability of dust. Note how the predicted dust in the model lines up with the detected dust in the RGB imagery. SPoRT image.

The teams divided their work along areas of expertise. SPoRT developed a labeled training dataset of dust imagery along with a simple random forest machine learning model. A random forest classifies data using an ensemble of decision trees: each tree votes on the class of a sample, and the fraction of trees voting for a class, in this case dust, yields the probability of dust at each point in an image. IMPACT machine learning experts provided feedback and advice that helped drive the development of both the machine learning and deep learning approaches. While SPoRT developed and validated a simple ML approach, IMPACT explored the feasibility of a DL approach to improve dust detection capabilities even further.
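As a rough illustration of this voting scheme (not SPoRT's actual code), the sketch below trains scikit-learn's RandomForestClassifier on synthetic per-pixel "channel" features and reads the fraction of tree votes as a dust probability:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-pixel features standing in for Dust RGB channel values:
# dust pixels (label 1) cluster at different values than non-dust (label 0).
n = 500
dust = rng.normal(loc=[0.8, 0.6, 0.7], scale=0.1, size=(n, 3))
clear = rng.normal(loc=[0.3, 0.4, 0.5], scale=0.1, size=(n, 3))
X = np.vstack([dust, clear])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Each tree votes dust / no-dust; the fraction of trees voting "dust"
# is the model's per-pixel dust probability.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probs = model.predict_proba([[0.8, 0.6, 0.7], [0.3, 0.4, 0.5]])[:, 1]
print(probs)  # high probability for the dust-like pixel, low for the clear one
```

Applied pixel by pixel across a full Dust RGB scene, this probability becomes the kind of dust-likelihood map shown in the comparison image above.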

“We are computer scientists at IMPACT; we don't have much expertise looking at Earth events or science phenomena,” says Muthukumaran Ramasubramanian, an IMPACT machine learning researcher on the project. “Having a collaboration from an outside team that is an expert in this area was really good.”

To train the dust detection model, the SPoRT team identified dust events, collected corresponding imagery, and used these to create a classification layer in GOES imagery. Polygons were manually drawn on GOES Dust RGB images to classify areas of dust and non-dust. These labeled classifications were fed into the model during training along with the RGB imagery bands. The team then provided unlabeled real-time imagery to the model to generate predictions of dust based on the training data to verify how accurately the machine was able to classify each image pixel as either containing dust or no dust.
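A minimal sketch of this labeling step, with the hand-drawn polygon simplified to a rectangle and random values standing in for the RGB bands (an operational workflow would rasterize real polygons over real imagery):

```python
import numpy as np

# Hypothetical 8x8 scene with 3 Dust RGB channels per pixel.
rng = np.random.default_rng(1)
scene = rng.random((8, 8, 3))

# A hand-drawn "dust polygon", simplified here to a rectangle:
# rows 2-5, cols 1-4 are labeled dust (1); everything else non-dust (0).
labels = np.zeros((8, 8), dtype=int)
labels[2:6, 1:5] = 1

# Flatten to per-pixel training samples: one feature row per pixel,
# paired with its dust / non-dust label.
X = scene.reshape(-1, 3)
y = labels.ravel()
print(X.shape, y.shape, int(y.sum()))  # (64, 3) (64,) 16
```

The resulting (features, label) pairs are exactly what a pixel-wise classifier such as a random forest consumes during training.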

Comparison of dust prediction accuracy from dust events labeled by humans (left images) and machine-created model predictions (right images) in GOES-16 Dust RGB imagery. NASA IMPACT images.

In images where the background was similar in color to dust, the team found that the model correctly labeled about 85% of the dust images and 99% of the non-dust images. The team refined the model to better delineate dust from smoke, further reducing dust detection errors. The result is a machine-based model capable of accurately detecting and tracking ongoing dust events, especially events occurring at night or transitioning from day into night.
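The per-class scores quoted above can be computed as simple conditional accuracies; the sketch below does so for a small, made-up pair of labeled and predicted masks:

```python
import numpy as np

# Hypothetical human-labeled vs. predicted dust masks (1 = dust, 0 = no dust).
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
pred  = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1])

# Per-class accuracy, in the style of the article's "85% of dust /
# 99% of non-dust" figures:
dust_acc = (pred[truth == 1] == 1).mean()   # fraction of dust pixels caught
clear_acc = (pred[truth == 0] == 0).mean()  # fraction of non-dust pixels correct
print(dust_acc, clear_acc)  # 0.75 and roughly 0.83 for this toy example
```

Reporting the two classes separately matters here because non-dust pixels vastly outnumber dust pixels, so a single overall accuracy would look deceptively high.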

While the simple machine learning approach was able to accurately distinguish dust plume boundaries at night, IMPACT’s application of deep learning was a better approach for addressing the complex image classification the team was tackling. As Muthukumaran points out, there were challenges in developing the deep learning component of this effort.

For one, the IMPACT team did not have many dust cases for training the model, given the infrequent and seasonal occurrence of major dust events. As Muthukumaran notes, a good segmentation model covering a large region needs thousands of training cases; in this instance, the SPoRT team was able to provide a little over 100. Another challenge was creating one model that could work across both daytime and nighttime. “Trying to make a single model is a harder problem than having two different models that work on daytime and nighttime separately,” Muthukumaran says.

The value of IMPACT’s DL approach is that it accounts for spatial variability within the image to make predictions with reasonable performance metrics. By developing a physically based approach grounded in remote sensing principles, the processes driving the onset and progression of high-impact dust events can be studied more easily. Model output that better delineates the spatial boundary of a dust storm, especially at night, can provide more lead time for determining when a dust-related hazard might exist and enable forecasters to more accurately designate watch and warning areas for dust storms.

“A lot of [forecasters] have known areas or zones that they refer to as source regions [for dust storms] and they monitor these areas,” says Kevin Fuell, a SPoRT team member who worked with end users to assess the capability of the SPoRT ML model. “Some of these source regions can produce plumes that are fairly small or narrow. If the machine learning probability output is providing a low probability of a dust event, it gives forecasters a heads up that something is starting, as opposed to in the current era where they have to wait until the plume is visible to know that something is starting up.”

To test the viability of the ML model in real-world conditions, SPoRT plans to partner with the National Weather Service to evaluate the utility of the ML model output with operational meteorologists.

The IMPACT team also intends to continue their deep learning efforts on dust detection and will incorporate this work in their Phenomena Detection Portal, which serves as an event catalog for results achieved using DL models.

“As an event catalog, [the Portal] will keep marking events daily and let users view the product using the [Portal] or API [application programming interface],” says Muthukumaran, who notes that dust is a logical addition to the Portal. “We have created a data model and have expertise where we can operationalize this model and run [it] daily on large-scale satellite imagery.”

The application of machine learning to dust is also helping uncover some of the relationships in the imagery. As Berndt observes, machine learning enables a closer examination of the processes involved in the development, lofting, and movement of dust. Along with these scientific applications, being able to better track dust at night also has practical applications including more rapid warnings and better awareness of situations that could be hazardous to air traffic, ground transportation, and human health.

“From a machine learning standpoint, this is a relatively simple model,” says Nicholas Elmer, the SPoRT team member who worked on the model development. “We’re just giving it satellite imagery and there are many other directions we can go. I think these results provide a lot of optimism for future capabilities in this area.”

Read more about the work:

Berndt, E., et al. 2021. A Machine Learning Approach to Objective Identification of Dust in Satellite Imagery. Earth and Space Science, 28 May 2021 [doi:10.1029/2021EA001788].

Koehl, D. 2020. Building Better Dust Detection. IMPACT Blog, 28 August 2020, https://bit.ly/2UddUVo.

Ramasubramanian, M., et al. 2021. Day time and Nighttime Dust Event Segmentation using Deep Convolutional Neural Networks. SoutheastCon 2021, pp. 1-5 [doi:10.1109/SoutheastCon45413.2021.9401859].
