
What is Remote Sensing?

Remote sensing is the acquisition of information from a distance. NASA observes Earth and other planetary bodies via remote instruments on space-based platforms (e.g., satellites or spacecraft) and on aircraft that detect and record reflected or emitted energy. Remote instruments, which provide a global perspective and a wealth of data about Earth systems, enable data-informed decision making based on the current and future state of our planet.

For more information, check out the Tech Talk by NASA's Interagency Implementation and Advanced Concepts Team (IMPACT): From Pixels to Products: An Overview of Satellite Remote Sensing.

Orbits

Space-based platforms can be placed in several types of orbits around Earth. The three common classes of orbits are low-Earth orbit (approximately 160 to 2,000 km above Earth), medium-Earth orbit (approximately 2,000 to 35,500 km above Earth), and high-Earth orbit (above approximately 35,500 km). Platforms orbiting at 35,786 km complete one orbit in the same time it takes Earth to rotate once and are in what is called geosynchronous orbit (GSO). A platform in GSO directly over the equator is in a geostationary orbit, which enables it to maintain its position directly over the same place on Earth’s surface.
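
The 35,786 km geostationary altitude quoted above can be derived from Kepler's third law using Earth's gravitational parameter and the length of a sidereal day. The short sketch below is illustrative only, not NASA code:

```python
# A minimal sketch showing how the ~35,786 km geostationary altitude follows
# from Kepler's third law: r = (GM * T^2 / (4 * pi^2))^(1/3).
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # Earth's equatorial radius, m
SIDEREAL_DAY = 86_164.1     # one rotation of Earth relative to the stars, s

orbital_radius = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (orbital_radius - R_EARTH) / 1000.0

print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```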

Low-Earth orbit is a commonly used orbit since platforms can follow several orbital tracks around the planet. Polar-orbiting platforms, for example, are inclined nearly 90 degrees to the equatorial plane and travel from pole to pole as Earth rotates. This enables instruments aboard these platforms to acquire data for the entire globe rapidly, including the polar regions. Many polar-orbiting platforms are considered Sun-synchronous, meaning that the platform passes over the same location at the same solar time each cycle. One example of a Sun-synchronous, polar-orbiting platform is NASA’s Aqua, which orbits approximately 705 km above Earth’s surface.

Non-polar low-Earth orbit platforms, on the other hand, do not provide global coverage but instead cover only a partial range of latitudes. The joint NASA/JAXA (Japan Aerospace Exploration Agency) Global Precipitation Measurement (GPM) Core Observatory is an example of a non-Sun-synchronous low-Earth orbit platform. Its orbital track acquires data between 65 degrees north and south latitude from 407 km above the planet.

A medium-Earth orbit platform takes approximately 12 hours to complete an orbit. Over the course of 24 hours, the platform crosses over the same two spots on the equator. This orbit is consistent and highly predictable, which is why it is used by many telecommunications and GPS platforms. One example of a medium-Earth orbit platform constellation is the ESA (European Space Agency) Galileo global navigation satellite system (GNSS), which orbits 23,222 km above Earth.

While both geosynchronous and geostationary platforms orbit at 35,786 km above Earth, geosynchronous platforms have orbits that can be tilted above or below the equator. Geostationary platforms, on the other hand, orbit Earth on the same plane as the equator. These platforms capture identical views of Earth with each observation and provide almost continuous coverage of one area. The joint NASA/NOAA Geostationary Operational Environmental Satellite (GOES) series of weather platforms are in geostationary orbits above the equator.

For more information about orbits, please see NASA Earth Observatory's Catalog of Earth Satellite Orbits.

Observing with the Electromagnetic Spectrum

Electromagnetic energy, produced by the vibration of charged particles, travels in the form of waves through the atmosphere and the vacuum of space. These waves have different wavelengths (the distance from wave crest to wave crest) and frequencies; a shorter wavelength means a higher frequency. Some, like radio, microwave, and infrared waves, have a longer wavelength, while others, such as ultraviolet, x-rays, and gamma rays, have a much shorter wavelength. Visible light sits in the middle of that range of long to shortwave radiation. This small portion of energy is all that the human eye is able to detect. Instrumentation is needed to detect all other forms of electromagnetic energy. NASA instruments utilize the full range of the electromagnetic spectrum to explore and understand processes occurring on Earth and on other planetary bodies.
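
The inverse relationship between wavelength and frequency follows from the fact that all electromagnetic waves travel at the speed of light, so frequency equals the speed of light divided by wavelength. The sketch below illustrates this with rough, representative wavelengths; the specific values are assumptions chosen for illustration, not instrument specifications:

```python
# A small illustrative sketch: frequency = speed of light / wavelength,
# so shorter wavelengths correspond to higher frequencies.
C = 2.998e8  # speed of light, m/s

wavelengths_m = {
    "radio": 1.0,            # ~1 m (assumed representative value)
    "microwave": 1e-2,       # ~1 cm
    "infrared": 1e-5,        # ~10 micrometers
    "visible (green)": 5.5e-7,
    "ultraviolet": 1e-8,
    "x-ray": 1e-10,
}

for band, wavelength in wavelengths_m.items():
    frequency_hz = C / wavelength
    print(f"{band:>16}: {wavelength:.1e} m -> {frequency_hz:.2e} Hz")
```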

Diagram of the electromagnetic spectrum. Credit: NASA Science.

Some waves are absorbed or reflected by atmospheric components, like water vapor and carbon dioxide, while other wavelengths pass through the atmosphere largely unimpeded; visible light, for example, has wavelengths that can be transmitted through the atmosphere. Microwave energy has wavelengths that can pass through clouds, an attribute utilized by instruments aboard many weather and communication platforms.

The primary source of the energy observed by instruments aboard space-based platforms is the Sun. The amount of the Sun's energy that is reflected depends on the roughness of the surface and its albedo, which is how well a surface reflects light instead of absorbing it. Snow, for example, has a very high albedo and reflects up to 90% of incoming solar radiation. The ocean, on the other hand, reflects only about 6% of incoming solar radiation and absorbs the rest. Often, when energy is absorbed, it is re-emitted, usually at longer wavelengths. For example, the energy absorbed by the ocean gets re-emitted as infrared radiation.
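
As a rough illustration, the fraction of incoming solar energy a surface reflects is simply its albedo, with the remainder absorbed. The sketch below uses the albedo values mentioned above and an assumed round-number irradiance:

```python
# A back-of-the-envelope sketch: reflected energy is the incoming solar
# irradiance multiplied by the surface albedo; the rest is absorbed.
INCOMING_W_M2 = 1000.0  # assumed clear-sky irradiance at the surface, W/m^2

surfaces = {"fresh snow": 0.90, "ocean": 0.06}  # albedo values from the text

for surface, albedo in surfaces.items():
    reflected = albedo * INCOMING_W_M2
    absorbed = (1.0 - albedo) * INCOMING_W_M2
    print(f"{surface}: reflects {reflected:.0f} W/m^2, absorbs {absorbed:.0f} W/m^2")
```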

All things on Earth reflect, absorb, or transmit energy, the amount of which varies by wavelength. Just as your fingerprint is unique to you, everything on Earth has a unique spectral fingerprint. Researchers can use this information to identify different Earth features as well as different rock and mineral types. The number of spectral bands detected by a given instrument, its spectral resolution, determines how much differentiation a researcher can identify between materials.
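
One common way to compare a measured spectrum against reference "fingerprints" is the spectral angle, which treats each spectrum as a vector and measures the angle between them; a smaller angle indicates a closer match. The toy sketch below uses invented four-band reflectance values purely for illustration, not a real spectral library:

```python
# A toy sketch of spectral "fingerprint" matching using the spectral angle.
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as vectors; smaller = more similar."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

# Hypothetical reflectance values in four bands (blue, green, red, near-infrared).
references = {
    "vegetation": [0.04, 0.08, 0.05, 0.50],   # low red, high near-infrared
    "bare soil":  [0.10, 0.15, 0.20, 0.30],
    "water":      [0.08, 0.06, 0.03, 0.01],
}

measured = [0.05, 0.09, 0.06, 0.45]
best = min(references, key=lambda name: spectral_angle(measured, references[name]))
print("Closest spectral match:", best)   # vegetation
```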

For more information on the electromagnetic spectrum, with companion videos, view NASA's Tour of the Electromagnetic Spectrum.

Passive and Active Instruments

Instruments aboard space-based platforms and aboard aircraft use the Sun as a source of illumination or provide their own source of illumination to measure energy that is reflected back. Instruments that use natural energy from the Sun are called passive instruments; those that provide their own source of energy are called active instruments.

Diagram of a passive instrument versus an active instrument. Credit: NASA Earthdata.

Passive instruments include different types of radiometers (instruments that quantitatively measure the intensity of electromagnetic radiation in select bands) and spectrometers (devices that are designed to detect, measure, and analyze the spectral content of reflected electromagnetic radiation). Most passive systems used by remote sensing applications operate in the visible, infrared, thermal infrared, and microwave portions of the electromagnetic spectrum. These instruments measure land and sea surface temperature, vegetation properties, cloud and aerosol properties, and other physical attributes. Most passive instruments cannot penetrate dense cloud cover and thus have limitations observing areas like the tropics where dense cloud cover is frequent.

Active instruments include different types of radio detection and ranging (radar) instruments, altimeters, and scatterometers. The majority of active instruments operate in the microwave band of the electromagnetic spectrum, which gives them the ability to penetrate the atmosphere under most conditions. These types of instruments are useful for measuring the vertical profiles of aerosols, forest structure, precipitation and winds, sea surface topography, and ice, among other criteria.

View a list of NASA's Earth science passive and active instruments and check out our Synthetic Aperture Radar (SAR) explainer, which provides specific information on SAR, a type of active instrument.

Resolution

Resolution plays a role in how data from an instrument can be used. Resolution can vary depending on the platform's orbit and instrument design. There are four types of resolution to consider for any dataset: radiometric, spatial, spectral, and temporal.

Radiometric resolution is the amount of information in each pixel, that is, the number of bits representing the energy recorded. The number of values a pixel can store is 2 raised to the number of bits. For example, an 8-bit resolution is 2⁸, which indicates that the instrument has 256 potential digital values (0-255) to store information. Thus, the higher the radiometric resolution, the more values are available to store information, providing better discrimination between even the slightest differences in energy. For example, when assessing water quality, radiometric resolution is necessary to distinguish between subtle differences in ocean color.
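
The relationship between bit depth and the number of storable values is simply 2 raised to the number of bits, as the short sketch below shows:

```python
# A quick sketch of how radiometric resolution (bit depth) maps to the number
# of distinct values each pixel can store: levels = 2 ** bits.
for bits in (8, 12, 16):
    levels = 2 ** bits
    print(f"{bits}-bit data: {levels} values (0-{levels - 1})")
# 8-bit data: 256 values (0-255)
# 12-bit data: 4096 values (0-4095)
# 16-bit data: 65536 values (0-65535)
```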

Advances in remote sensing technology have significantly improved imagery acquired by instruments aboard space-based platforms. Among the advances are improvements in radiometric resolution, or how sensitive an instrument is to small differences in electromagnetic energy. Instruments with high radiometric resolution can distinguish greater detail and variation in light. Credit: NASA Earth Observatory images by Joshua Stevens, using Landsat data from the U.S. Geological Survey.

Spatial resolution is defined by the size of each pixel within a digital image and the area on Earth's surface represented by that pixel.

Different spatial resolutions can be used for specific observational or research needs. This image shows spatial resolution examples for common NASA instrument products and a research scale for which they are best suited (regional, national, continental, etc.). Credit: NASA ESDS.

For example, the majority of the bands observed by the Moderate Resolution Imaging Spectroradiometer (MODIS) have a spatial resolution of 1 km; each pixel represents a 1 km x 1 km area on the ground. MODIS also includes bands with a spatial resolution of 250 m or 500 m. The finer the resolution (the lower the number), the more detail you can see. In the image below, you can see the difference in pixelation between a 30 m/pixel image (left image), a 100 m/pixel image (center image), and a 300 m/pixel image (right image).

Landsat 8 image of Reykjavik, Iceland, acquired July 7, 2019, illustrating the difference in pixel resolution. Credit: NASA Earth Observatory.
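
As a quick illustration of what these pixel sizes mean on the ground, the sketch below (with made-up image dimensions) converts pixel size and pixel count into the area an image covers:

```python
# A minimal sketch relating pixel size to ground coverage; the 1000 x 1000
# image dimensions are invented for illustration.
def ground_coverage_km(rows, cols, pixel_size_m):
    """Return the (height, width) of the imaged area in kilometers."""
    return rows * pixel_size_m / 1000.0, cols * pixel_size_m / 1000.0

for pixel_size_m in (30, 100, 300):
    height_km, width_km = ground_coverage_km(1000, 1000, pixel_size_m)
    print(f"{pixel_size_m:>3} m/pixel: a 1000 x 1000 pixel image covers "
          f"{height_km:.0f} km x {width_km:.0f} km")
```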

Spectral resolution is the ability of an instrument to discern finer wavelengths, that is, having more and narrower bands. Many instruments are considered to be multispectral, meaning they have 3-10 bands. Some instruments have hundreds to even thousands of bands and are considered to be hyperspectral. The narrower the range of wavelengths for a given band, the finer the spectral resolution. For example, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) captures information in 224 spectral channels. The cube below represents the detail within the data. At this level of detail, distinctions can be made between rock and mineral types, vegetation types, and other features. In the cube, the small region of high response in the right corner of the image is in the red portion of the visible spectrum (about 700 nanometers), and is due to the presence of 1-centimeter-long (half-inch) red brine shrimp in the evaporation pond.

The top of the cube is a false-color image made to accentuate the structure in the water and evaporation ponds on the right. The sides of the cube are slices showing the edges of the top in all 224 of the AVIRIS spectral channels. The tops of the sides are in the visible part of the spectrum (wavelength of 400 nanometers), and the bottoms are in the infrared (2,500 nanometers). Credit: NASA Jet Propulsion Laboratory.
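
In practice, hyperspectral data like these are often handled as a three-dimensional array of rows, columns, and bands, which makes it easy to pull out a single pixel's full spectrum or a single band's image. The sketch below uses placeholder dimensions and random values rather than real AVIRIS radiances:

```python
# A sketch of a hyperspectral "cube" as a 3-D array (rows x columns x bands).
import numpy as np

rows, cols, bands = 512, 614, 224          # AVIRIS records 224 spectral channels;
cube = np.random.rand(rows, cols, bands)   # scene size and values are placeholders

pixel_spectrum = cube[100, 200, :]   # full 224-channel spectrum for one pixel
single_band = cube[:, :, 30]         # one spectral channel as a 2-D image

print(pixel_spectrum.shape)  # (224,)
print(single_band.shape)     # (512, 614)
```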

Temporal resolution is the time it takes for a space-based platform to complete an orbit and revisit the same observation area. Temporal resolution depends on the orbit, the instrument's characteristics, and the swath width. Because geostationary platforms match the rate at which Earth rotates, they view the same area continuously and therefore have a much finer temporal resolution. Polar-orbiting platforms have a temporal resolution that can vary from 1 day to 16 days. For example, the MODIS instrument aboard NASA's Terra and Aqua platforms has a temporal resolution of 1-2 days, allowing the instrument to visualize Earth as it changes day by day. The Operational Land Imager (OLI) aboard the joint NASA/USGS Landsat 8 platform, on the other hand, has a narrower swath width and a temporal resolution of 16 days, meaning its imagery is best for showing changes at roughly two-week intervals.
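
As a concrete illustration of a 16-day revisit cycle like Landsat 8's, the sketch below lists the next few acquisition dates following the July 7, 2019 Reykjavik scene shown above; the dates are simply 16-day increments, not an actual acquisition schedule:

```python
# A small sketch of what a 16-day revisit cycle means in practice:
# the same scene is imaged again every 16 days.
from datetime import date, timedelta

first_acquisition = date(2019, 7, 7)   # example date from the Landsat image above
revisits = [first_acquisition + timedelta(days=16 * i) for i in range(4)]
print([d.isoformat() for d in revisits])
# ['2019-07-07', '2019-07-23', '2019-08-08', '2019-08-24']
```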

Why not build an instrument that combines high spatial, spectral, and temporal resolution? It is difficult to combine all of the desirable features into one remote instrument. For example, acquiring observations with high spatial resolution (like OLI aboard Landsat 8) requires a narrower swath, which in turn means more time between observations of a given area and thus a lower temporal resolution. Researchers have to make trade-offs. This is why it is very important to understand the type of data needed for a given area of study. When researching weather, which is dynamic over time, high temporal resolution is critical. When researching seasonal vegetation changes, on the other hand, high temporal resolution may be sacrificed for higher spectral or spatial resolution.

Data Processing, Interpretation, and Analysis

Remote sensing data acquired from instruments aboard satellites require processing before the data are usable by most researchers and applied science users. Most raw NASA Earth observation data (Level 0, see data processing levels) are processed at NASA's Science Investigator-led Processing Systems (SIPS) facilities. All data are processed to at least a Level 1, but most have associated Level 2 (derived geophysical variables) and Level 3 (variables mapped on uniform space-time grid scales) products. Many even have Level 4 products. NASA Earth science data are available fully, openly, and without restriction to data users.

Most data are stored in Hierarchical Data Format (HDF) or Network Common Data Form (NetCDF) format. Numerous data tools are available to subset, transform, visualize, and export the data to various other file formats.
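
As one common workflow, NetCDF files can be opened, subset, and re-exported with the open-source xarray library. The sketch below is illustrative only; the file name, variable name, and coordinate names are assumptions, not a specific NASA product:

```python
# A minimal sketch (hypothetical file and variable names) of reading,
# subsetting, and exporting a NetCDF file with xarray.
import xarray as xr

ds = xr.open_dataset("example_granule.nc")   # placeholder path, not a real product
print(ds)                                    # lists dimensions, coordinates, variables

sst = ds["sea_surface_temperature"]          # assumed variable name for illustration
subset = sst.sel(lat=slice(30, 50), lon=slice(-130, -110))  # assumed coordinate names
subset.to_netcdf("example_subset.nc")        # export the subset to a new file
```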

Once data are processed, they can be used in a variety of applications, from agriculture to water resources to health and air quality. A single instrument will not address all research questions within a given application. Users often need to leverage multiple instruments and data products to address their questions, bearing in mind the limitations of data provided by different spectral, spatial, and temporal resolutions.

Creating Satellite Imagery

Many instruments acquire data at different spectral wavelengths. For example, Band 1 of the OLI aboard Landsat 8 acquires data at 0.433-0.453 micrometers, while MODIS Band 1 acquires data at 0.620-0.670 micrometers. OLI has a total of 9 bands, whereas MODIS has 36 bands, all measuring different regions of the electromagnetic spectrum. Bands can be combined to produce imagery that reveals different features in the landscape. Imagery is often used to distinguish characteristics of a region being studied or to determine an area of study.

True-color images show Earth as it appears to the human eye. For a Landsat 8 OLI true-color (red, green, blue [RGB]) image, the sensor Bands 4 (Red), 3 (Green), and 2 (Blue) are combined. Other spectral band combinations can be used for specific science applications, such as flood monitoring, urbanization delineation, and vegetation mapping. For example, creating a false-color image from data acquired by the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (Suomi NPP) platform, using bands M11, I2, and I1 is useful for distinguishing burn scars from low vegetation or bare soil as well as for exposing flooded areas. To see more band combinations from Landsat sensors, check out NASA Scientific Visualization Studio's video Landsat Band Remix or the NASA Earth Observatory article Many Hues of London. For other common band combinations, see NASA Earth Observatory's How to Interpret Common False-Color Images, which provides common band combinations along with insight into interpreting imagery.
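
A true-color composite is built by stacking the red, green, and blue bands into a single three-channel image and stretching the values for display. The sketch below uses random arrays as stand-ins for real Landsat 8 OLI Band 4, 3, and 2 rasters:

```python
# A simplified sketch of building a true-color (RGB) composite from Landsat 8
# OLI Bands 4, 3, and 2; the arrays here are placeholders for real band rasters.
import numpy as np

rows, cols = 512, 512
band4_red = np.random.rand(rows, cols)    # stand-ins for surface reflectance
band3_green = np.random.rand(rows, cols)
band2_blue = np.random.rand(rows, cols)

rgb = np.dstack([band4_red, band3_green, band2_blue])  # shape (rows, cols, 3)

# A simple linear stretch so the composite uses the full 0-1 display range.
rgb_stretched = (rgb - rgb.min()) / (rgb.max() - rgb.min())
print(rgb_stretched.shape)  # (512, 512, 3)
```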

Fire scars reflect strongly in Landsat's Band 7, which acquires data in the shortwave infrared range. The fire scar is not visible in the left image, which is a standard true-color image. The fire scar stands out clearly in red in the right image, which is a false-color infrared image. Credit: NASA.

Image Interpretation

Once data are processed into imagery with varying band combinations, these images can aid in resource management decisions and disaster assessment. This requires proper interpretation of the imagery. There are a few strategies for getting started (adapted from the NASA Earth Observatory article How to Interpret a Satellite Image: Five Tips and Strategies):

  1. Know the scale. There are different scales based on the spatial resolution of the image, and each scale reveals different features of importance. For example, when tracking a flood, a detailed, high-resolution view will show individual homes and businesses surrounded by water. The wider landscape view shows the parts of a county or metropolitan area that are flooded and perhaps the source of the water. An even broader view would show the entire region: the flooded river system or the mountain ranges and valleys that control the flow. A hemispheric view would show the movement of weather systems connected to the floods.
  2. Look for patterns, shapes and textures. Many features are easy to identify based on their pattern or shape. For example, agricultural areas are generally geometric in shape, usually circles or rectangles. Straight lines are typically human-created structures, like roads or canals.
  3. Define colors. When using color to distinguish features, it's important to know the band combination used in creating the image. True- or natural-color images are created using band combinations that replicate what we would see with our own eyes if looking down from space. Water absorbs light so it typically appears black or blue in true-color images; sunlight reflecting off the water surface might make it appear gray or silver. Sediment can make water color appear more brown, while algae can make water appear more green. Vegetation ranges in color depending on the season: in the spring and summer, it's typically a vivid green; fall may have orange, yellow, and tan; and winter may have more browns. Bare ground is usually some shade of brown, although this depends on the mineral composition of the sediment. Urban areas are typically gray from the extensive use of concrete. Ice and snow are white in true-color imagery, but so are clouds. When using color to identify objects or features, it's important to also use surrounding features to put things in context.
  4. Consider what you know. Having knowledge of the area you are observing aids in the identification of these features. For example, knowing that an area was recently burned by a wildfire can help determine why vegetation may appear different in a remotely-sensed image.

Quantitative Analysis

Different land cover types can be discriminated more readily by using image classification algorithms. Image classification uses the spectral information of individual image pixels. A program using image classification algorithms can automatically group the pixels in what is called an unsupervised classification. The user can also indicate areas of known land cover type to "train" the program to group like pixels; this is called a supervised classification. Maps or imagery can also be integrated into a geographic information system (GIS) and then each pixel can be compared with other GIS data, such as census data. View more information on integrating NASA Earth science data into a GIS.
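
As a minimal sketch of unsupervised classification, the example below reshapes an image into a table of pixel spectra and clusters them with k-means using scikit-learn; the image values are random stand-ins, and the number of classes is an arbitrary choice:

```python
# A toy sketch of unsupervised classification: clustering pixels by their
# spectral values with k-means, using random data in place of a real image.
import numpy as np
from sklearn.cluster import KMeans

rows, cols, bands = 200, 200, 6
image = np.random.rand(rows, cols, bands)        # placeholder reflectance values

pixels = image.reshape(-1, bands)                # one row per pixel, one column per band
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
class_map = kmeans.labels_.reshape(rows, cols)   # each pixel assigned to one of 5 classes

print(np.unique(class_map))  # [0 1 2 3 4]
```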

Space-based platforms also often carry a variety of instruments measuring biogeophysical parameters, such as sea surface temperature, nitrogen dioxide, or other atmospheric pollutants, winds, aerosols, and biomass. These parameters can be evaluated through statistical and spectral analysis techniques.

Data Pathfinders by Topic

To aid in getting started with applications-based research using remotely-sensed data, Earthdata's Data Pathfinders by Topic provide a data product selection guide focused on specific science disciplines and application areas, such as those mentioned above. Topic-driven Data Pathfinders provide direct links to the most commonly-used datasets and data products from NASA's Earth science data collections along with links to tools that provide ways to visualize or subset the data, with the option to save the data in different file formats.