What is Remote Sensing?
Remote sensing is the acquisition of information from a distance. NASA observes Earth and other planetary bodies via remote sensors on satellites and aircraft that detect and record reflected or emitted energy. Remote sensors, which provide a global perspective and a wealth of data about Earth systems, enable data-informed decision making based on the current and future state of our planet.
- Observing with the Electromagnetic Spectrum
- Data Processing, Interpretation, and Analysis
- Data Pathfinders
For more information, check out NASA's Interagency Implementation and Advanced Concepts Team (IMPACT) Tech Talk: From Pixels to Products: An Overview of Satellite Remote Sensing.
There are three primary types of orbits in which satellites reside: polar; non-polar, low-Earth orbit; and geostationary.
Polar-orbiting satellites are in an orbital plane that is inclined at nearly 90 degrees to the equatorial plane. This inclination allows the satellite to sense the entire globe, including the polar regions, providing observations of locations that are difficult to reach from the ground. Many polar-orbiting satellites are considered sun-synchronous, meaning that the satellite passes over the same location at the same solar time each cycle.
Polar orbits can be ascending or descending. In ascending orbits, satellites move south to north when their path crosses the equator; in descending orbits, they move north to south. The joint NASA/NOAA Suomi National Polar-orbiting Partnership (Suomi NPP) satellite is an example of a polar-orbiting satellite that provides daily coverage of the globe.
Non-polar, low-Earth orbits are typically at an altitude of less than 2,000 km above Earth’s surface. (For reference, the International Space Station orbits at an altitude of ~400 km.) These orbits do not provide global coverage but instead cover only a partial range of latitudes. The Global Precipitation Measurement (GPM) mission is an example of a non-polar, low-Earth orbit satellite, covering from 65 degrees north to 65 degrees south.
Geostationary satellites follow Earth’s rotation, traveling at the same rate as Earth rotates; because of this, they appear to an observer on Earth to be fixed in one location. These satellites capture the same view of Earth with each observation and so provide almost continuous coverage of one area. Weather satellites such as the Geostationary Operational Environmental Satellite (GOES) series are examples of geostationary satellites.
Observing with the Electromagnetic Spectrum
Electromagnetic energy, produced by the vibration of charged particles, travels in the form of waves through the atmosphere and the vacuum of space. These waves have different wavelengths (the distance from wave crest to wave crest) and frequencies; a shorter wavelength means a higher frequency. Some, like radio, microwave, and infrared waves, have a longer wavelength, while others, such as ultraviolet, x-rays, and gamma rays, have a much shorter wavelength. Visible light sits in the middle of that range of long to shortwave radiation. This small portion of energy is all that the human eye is able to detect. Instrumentation is needed to detect all other forms of electromagnetic energy. NASA instrumentation utilizes the full range of the spectrum to explore and understand processes occurring here on Earth and on other planetary bodies.
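As a quick illustration of the wavelength-frequency relationship described above, the minimal Python sketch below converts wavelength to frequency using c = λν; the example wavelengths are chosen only for illustration and are not tied to any particular sensor.

```python
# Convert electromagnetic wavelength to frequency: c = wavelength * frequency.
# The wavelengths below are illustrative examples, not tied to any sensor.
C = 2.998e8  # speed of light in m/s

wavelengths_m = {
    "radio (1 m)": 1.0,
    "microwave (1 cm)": 1e-2,
    "visible red (700 nm)": 700e-9,
    "x-ray (1 nm)": 1e-9,
}

for name, wavelength in wavelengths_m.items():
    frequency_hz = C / wavelength  # shorter wavelength -> higher frequency
    print(f"{name}: {frequency_hz:.3e} Hz")
```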
Some waves are absorbed or reflected by elements in the atmosphere, like water vapor and carbon dioxide, while other wavelengths move through the atmosphere unimpeded; visible light has wavelengths that can be transmitted through the atmosphere. Microwave energy has wavelengths that can pass through clouds, a property many weather and communication satellites take advantage of. The primary source of the energy observed by satellites is the Sun. The amount of the Sun’s energy reflected depends on the roughness of the surface and its albedo, which is how well a surface reflects light instead of absorbing it. Snow, for example, has a very high albedo, reflecting up to 90% of the energy it receives from the Sun, whereas the ocean reflects only about 6%, absorbing the rest. Often, when energy is absorbed, it is re-emitted, usually at longer wavelengths. For example, the energy absorbed by the ocean gets re-emitted as infrared radiation.
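To make the albedo figures above concrete, here is a minimal worked example; the incoming flux value is an approximate global-average solar flux, used purely for illustration.

```python
# Reflected vs. absorbed energy for surfaces with different albedos.
# Albedo values follow the text above; the incoming flux is illustrative.
incoming_flux = 340.0  # W/m^2, approximate global-average solar flux

for surface, albedo in [("snow", 0.90), ("ocean", 0.06)]:
    reflected = albedo * incoming_flux
    absorbed = incoming_flux - reflected  # later re-emitted at longer wavelengths
    print(f"{surface}: reflects {reflected:.0f} W/m^2, absorbs {absorbed:.0f} W/m^2")
```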
All things on Earth reflect, absorb, or transmit energy, the amount of which varies by wavelength. Everything on Earth has a unique spectral “fingerprint,” just as your fingerprint is unique to you. Researchers can use this information to identify different Earth features, as well as different rock and mineral types. The number of spectral bands detected by a given instrument, its spectral resolution, determines how much differentiation a researcher can identify between materials.
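One common way to compare a measured spectrum against known spectral "fingerprints" is the spectral angle: the smaller the angle between two spectra treated as vectors, the more similar the materials. The sketch below shows the idea; the reference spectra are made-up placeholders, not real laboratory values.

```python
import numpy as np

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Angle in radians between two spectra treated as vectors;
    a smaller angle means a more similar spectral fingerprint."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical 4-band reflectance spectra (placeholder values).
references = {
    "vegetation": np.array([0.04, 0.08, 0.05, 0.50]),  # strong near-infrared
    "bare soil":  np.array([0.15, 0.20, 0.25, 0.30]),
}
pixel = np.array([0.05, 0.09, 0.06, 0.45])

best = min(references, key=lambda k: spectral_angle(pixel, references[k]))
print("Closest match:", best)
```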
For more information on the electromagnetic spectrum, with companion videos, view NASA's Tour of the Electromagnetic Spectrum.
Sensors, or instruments, aboard satellites and aircraft use the Sun as a source of illumination or provide their own source of illumination, measuring the energy that is reflected back. Sensors that use natural energy from the Sun are called passive sensors; those that provide their own source of energy are called active sensors.
Passive sensors include different types of radiometers (instruments that quantitatively measure the intensity of electromagnetic radiation in select bands) and spectrometers (devices that are designed to detect, measure, and analyze the spectral content of reflected electromagnetic radiation). Most passive systems used by remote sensing applications operate in the visible, infrared, thermal infrared, and microwave portions of the electromagnetic spectrum. These sensors measure land and sea surface temperature, vegetation properties, cloud and aerosol properties, and other physical properties.
Note that most passive sensors cannot penetrate dense cloud cover and thus are limited when observing areas like the tropics, where dense cloud cover is frequent.
Active sensors include different types of radio detection and ranging (radar) sensors, altimeters, and scatterometers. The majority of active sensors operate in the microwave band of the electromagnetic spectrum, which gives them the ability to penetrate the atmosphere under most conditions. These types of sensors are useful for measuring the vertical profiles of aerosols, forest structure, precipitation and winds, sea surface topography, and ice, among others.
The Earthdata page Remote Sensors provides a list of all of NASA’s Earth science passive and active sensors. What is Synthetic Aperture Radar? provides specific information on this type of active radar sensor.
Resolution plays a role in how data from a sensor can be used. Depending on the satellite’s orbit and sensor design, resolution can vary. There are four types of resolution to consider for any dataset—radiometric, spatial, spectral, and temporal.
Radiometric resolution is the amount of information in each pixel, i.e., the number of bits used to represent the energy recorded. The number of digital values available is 2 raised to the number of bits: an 8-bit resolution, for example, gives 2^8 = 256 potential digital values (0-255) to store information. Thus, the higher the radiometric resolution, the more values are available to store information, providing better discrimination between even the slightest differences in energy. For example, when assessing water quality, radiometric resolution is necessary to distinguish between subtle differences in ocean color.
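The short sketch below shows how bit depth sets the number of digital values available and how a measured signal would be quantized at each depth; the signal value is illustrative.

```python
# Number of digital values available at different radiometric resolutions,
# and quantization of a normalized signal (0.0-1.0) into digital numbers.
signal = 0.4217  # illustrative normalized radiance

for bits in (8, 12, 16):
    levels = 2 ** bits                 # e.g. 8 bits -> 256 values (0-255)
    dn = round(signal * (levels - 1))  # digital number actually stored
    print(f"{bits}-bit: {levels} values, signal stored as DN {dn}")
```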
Spatial resolution is defined by the size of each pixel within a digital image and the area on Earth’s surface represented by that pixel. For example, the majority of the bands observed by the Moderate Resolution Imaging Spectroradiometer (MODIS) have a spatial resolution of 1 km; each pixel represents a 1 km x 1 km area on the ground. MODIS also includes bands with a spatial resolution of 250 m or 500 m. The finer the resolution (the lower the number), the more detail you can see. In the image below, you can see the difference in pixelation between a 30 m/pixel image, a 100 m/pixel image, and a 300 m/pixel image.
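As a rough illustration of that pixelation effect, the sketch below block-averages a fine grid into larger pixels, mimicking a coarser spatial resolution; the input array is synthetic.

```python
import numpy as np

def coarsen(image: np.ndarray, factor: int) -> np.ndarray:
    """Block-average an image so each output pixel covers
    factor x factor input pixels (a coarser spatial resolution)."""
    h, w = image.shape
    trimmed = image[: h - h % factor, : w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

fine = np.random.rand(300, 300)   # stand-in for a 300x300 grid of 30 m pixels
coarse = coarsen(fine, 10)        # 10x coarser: 300 m pixels
print(fine.shape, "->", coarse.shape)  # (300, 300) -> (30, 30)
```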
Spectral resolution is the ability of a sensor to discern finer wavelengths, that is, to have more and narrower bands. Many sensors are considered to be multispectral, meaning they have 3 to 10 bands. Sensors that have hundreds to even thousands of bands are considered to be hyperspectral. The narrower the range of wavelengths for a given band, the finer the spectral resolution. For example, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) captures information in 224 spectral channels. At this level of detail, distinctions can be made between rock and mineral types, vegetation types, and other features. In one AVIRIS data cube, a small region of high response in the red portion of the visible spectrum (about 700 nanometers) is due to the presence of 1-centimeter-long (half-inch) red brine shrimp in an evaporation pond.
Temporal resolution is the time it takes for a satellite to complete an orbit and revisit the same observation area. This resolution depends on the orbit, the sensor’s characteristics, and the swath width. Because geostationary satellites match the rate at which Earth rotates, their temporal resolution is much finer: about 30 seconds to 1 minute. Polar-orbiting satellites have a temporal resolution that can vary from 1 day to 16 days. For example, MODIS has a temporal resolution of 1-2 days, allowing us to visualize Earth as it changes day by day. Landsat, on the other hand, has a narrower swath width and a temporal resolution of 16 days, showing not daily changes but roughly twice-monthly changes.
Why not build a sensor with high spatial, spectral, and temporal resolution? It is difficult to combine all of the desirable features into a single remote sensor. To acquire observations with high spatial resolution (like Landsat), a narrower swath is required, which in turn requires more time between observations of a given area, resulting in a lower temporal resolution. Researchers have to make trade-offs. This is why it is very important to understand what type of data is needed for any given area of study. When researching weather, which is very dynamic over time, having a fine temporal resolution is critical. When researching seasonal vegetation changes, a fine temporal resolution may be sacrificed for a higher spectral and/or spatial resolution.
Data Processing, Interpretation, and Analysis
Remote sensing data acquired from instruments aboard satellites require processing before the data are usable by most researchers and applied science users. Most raw NASA Earth observation satellite data (Level 0; see data processing levels) are processed at Science Investigator-led Processing Systems (SIPS) facilities. All data are processed to at least Level 1, but most have associated Level 2 (derived geophysical variables) and Level 3 (variables mapped on uniform space-time grid scales) products. Many even have Level 4 products. NASA Earth science data are archived at one of the Distributed Active Archive Centers (DAACs).
Most data are stored in the Hierarchical Data Format (HDF) or the Network Common Data Form (NetCDF) format. Numerous data tools are available to subset, transform, visualize, and export to various other file formats.
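As a minimal sketch of reading one of these formats, the example below opens a NetCDF file with the netCDF4 Python library. The file name and variable name are hypothetical placeholders; actual names vary by data product, so inspect the file's variables first.

```python
from netCDF4 import Dataset  # common Python library for NetCDF files

# "example_product.nc" and "sea_surface_temperature" are placeholder names;
# inspect ds.variables to see what a given product actually contains.
with Dataset("example_product.nc") as ds:
    print(ds.variables.keys())                # list available variables
    sst = ds["sea_surface_temperature"][:]    # read variable as a masked array
    print(sst.shape, sst.min(), sst.max())
```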
Once data are processed, they can be used in a variety of applications, from agriculture to water resources to health and air quality. Any one single sensor will not address all research questions within a given application. Users often need to leverage multiple sensors and data products to address their question, bearing in mind the limitations of data provided by different spectral, spatial, and temporal resolutions.
Creating Satellite Imagery
Many sensors acquire data at different spectral wavelengths. For example, band 1 of Landsat 8 acquires data at 0.433-0.453 micrometers, while band 1 of MODIS acquires data at 0.620-0.670 micrometers. Landsat 8 has a total of 11 bands, whereas MODIS has 36 bands, all measuring different regions of the electromagnetic spectrum. Bands can be combined to produce imagery of the data that reveals different features in the landscape. Imagery of the data is often used to distinguish characteristics of a region being studied or to determine an area of study.
For a true-color (red, green, blue (RGB)) image from Landsat, bands 4 (red), 3 (green), and 2 (blue) are combined; with the NASA/NOAA joint Suomi National Polar-orbiting Partnership (Suomi NPP) Visible Infrared Imaging Radiometer Suite (VIIRS), a true-color image is Red = Band I1, Green = Band M4, Blue = Band M3. True-color images show Earth as you would see it from above. Other combinations, however, can be used for specific science applications—from flood monitoring to urbanization delineation to vegetation mapping. For example, with VIIRS data, creating a false-color image (R = M11, G = I2, B = I1) is useful for distinguishing burn scars from low vegetation or bare soil, as well as for exposing flooded areas. To see more band combinations from Landsat, check out NASA's Scientific Visualization Studio Landsat Band Remix or the Earth Observatory Many Hues of London article. For other common band combinations, view Earth Observatory’s How to Interpret Common False-Color Images; the article not only lists common band combinations but also provides insight into interpreting the imagery.
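A minimal sketch of building a Landsat true-color composite from bands 4, 3, and 2 with the rasterio library is shown below. The file names are hypothetical placeholders, and the simple percentile stretch is just one of many ways to scale reflectance for display.

```python
import numpy as np
import rasterio

def stretch(band: np.ndarray) -> np.ndarray:
    """Linear percentile stretch to the 0-1 range for display."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo), 0, 1)

# Hypothetical single-band GeoTIFFs for Landsat 8 bands 4 (red), 3 (green), 2 (blue).
bands = []
for path in ("LC08_B4.TIF", "LC08_B3.TIF", "LC08_B2.TIF"):
    with rasterio.open(path) as src:
        bands.append(stretch(src.read(1).astype("float32")))

rgb = np.dstack(bands)  # rows x cols x 3 true-color array, ready to plot
```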
Once data are processed into imagery with varying band combinations, they can aid in resource management decisions and disaster assessment; the imagery just needs to be interpreted. There are a few strategies for getting started (adapted from the Earth Observatory’s How to Interpret a Satellite Image).
- Know the scale — there are different scales based on the spatial resolution of the image and each scale provides different features of importance. For example, when tracking a flood, a detailed, high-resolution view will show which homes and businesses are surrounded by water. The wider landscape view shows which parts of a county or metropolitan area are flooded and perhaps where the water is coming from. An even broader view would show the entire region—the flooded river system or the mountain ranges and valleys that control the flow. A hemispheric view would show the movement of weather systems connected to the floods.
- Look for patterns, shapes, and textures — many features are easy to identify based on their pattern or shape. For example, agricultural areas are very geometric in shape, usually circles or rectangles. Straight lines are typically man-made structures, like roads or canals.
- Define colors — when using color to distinguish features, it’s important to know the band combination used in creating the image. True- or natural-color images are basically what we would see with our own eyes if looking down from space. Water absorbs light so typically it appears black or blue; however, sunlight reflecting off the surface might make it appear gray or silver. Sediment can affect water color, making it appear more brown, as can algae, making it appear more green. Vegetation ranges in color depending on the season: in the spring and summer, it’s typically a vivid green; fall may have orange, yellow, and tan; and winter may have more browns. Bare ground is usually some shade of brown; however, it depends on the mineral composition of the sediment. Urban areas are typically gray from the extensive concrete. Ice and snow are white, but so are clouds. It’s important when using color to identify things to use surrounding features to put things in context.
- Consider what you know — having knowledge of the area you are observing aids in the identification of these features. For example, knowing that the area was recently burned by a wildfire can help determine why vegetation may look a bit different.
Different land cover types can be discriminated more readily by using image classification algorithms. Image classification uses the spectral information of each individual pixel. A program using image classification algorithms can group the pixels automatically in what is called an unsupervised classification. Alternatively, the user can indicate areas of known land cover type to “train” the program to group like pixels; this is called a supervised classification. Maps or imagery can also be integrated into a geographic information system (GIS), where each pixel can be compared with other GIS data, such as census data. For more information on integrating NASA Earth science data into a GIS, check out the Earthdata GIS page.
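A minimal unsupervised-classification sketch using k-means from scikit-learn is shown below; the image array is synthetic, and the number of classes is an assumption the analyst must choose for their own scene.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for a multispectral image: rows x cols x bands.
rows, cols, n_bands = 100, 100, 4
image = np.random.rand(rows, cols, n_bands)

# Unsupervised classification: cluster each pixel's spectral vector.
pixels = image.reshape(-1, n_bands)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
class_map = labels.reshape(rows, cols)  # one land cover class ID per pixel
print(np.bincount(labels))              # pixel count per class
```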
Satellites also often carry a variety of sensors measuring biogeophysical parameters, such as sea surface temperature, nitrogen dioxide or other atmospheric pollutants, winds, aerosols, and biomass. These parameters can be evaluated through statistical and spectral analysis techniques.
Data Pathfinders
To aid in getting started with applications-based research using remotely sensed data, Data Pathfinders provide a data product selection guide focused on specific science disciplines and application areas, such as those mentioned above. Pathfinders provide direct links to the most commonly used datasets and data products from NASA’s Earth science data collections, along with links to tools that provide varying ways of visualizing or subsetting the data, with the option to save the data in different file formats.