Theoretical Background: Radar Remote Sensing Fundamentals

Author: David Moravec

Published: February 24, 2026

1 Introduction: The Active Microwave Perspective

Radar (Radio Detection and Ranging) remote sensing represents a fundamentally different approach to Earth observation compared to optical and infrared methods. While passive optical sensors rely on reflected sunlight, radar systems actively transmit microwave radiation and measure the backscattered signal (Lillesand, Kiefer, and Chipman 2015). This active nature, combined with the long wavelengths of microwave radiation (ranging from millimeters to meters), provides unique capabilities that overcome many limitations of passive optical remote sensing.

While passive microwave radiometers also exist—measuring naturally emitted thermal microwave radiation from the Earth’s surface—this chapter focuses exclusively on active radar systems. Passive microwave sensors operate similarly to thermal infrared sensors but at longer wavelengths, providing information primarily about surface temperature, soil moisture, and atmospheric water content (Ulaby and Long 2013). However, active radar systems offer distinct advantages: they provide their own controlled illumination, achieve much finer spatial resolution through coherent signal processing, and enable measurement of surface geometric and structural properties through analysis of backscattered signals.

The electromagnetic spectrum shows that microwave radiation occupies the region between radio waves and infrared radiation, with wavelengths ranging from approximately 1 mm to 1 m (Figure 1). Within the atmosphere, specific wavelength regions—called atmospheric windows—allow electromagnetic radiation to propagate with minimal attenuation. The microwave region features broad atmospheric windows that enable radar signals to penetrate clouds, fog, rain, and darkness, providing all-weather, day-and-night imaging capabilities (Ulaby and Long 2013).

Figure 1: Electromagnetic spectrum showing the position of radar bands. The microwave region (highlighted) spans wavelengths from ~1 mm to 1 m, corresponding to frequencies from 300 GHz to 300 MHz. Common radar bands are labeled: Ka, K, Ku, X, C, S, L, and P-band. The lower graph shows atmospheric opacity, with the microwave “radio window” demonstrating excellent atmospheric transmission compared to visible and infrared regions. Source: NASA

Three fundamental advantages distinguish radar from optical remote sensing:

  1. Active illumination: Radar provides its own energy source, enabling observations regardless of solar illumination and at any time of day or night (Henderson and Lewis 1998)
  2. Cloud penetration: Microwave radiation passes through clouds, smoke, and atmospheric moisture that block optical sensors (Woodhouse 2017)
  3. Surface penetration: Longer radar wavelengths can penetrate vegetation canopies and even soil or ice surfaces, revealing subsurface structures and volume properties (Ulaby and Long 2013)

These capabilities make radar particularly valuable for forest monitoring, where cloud cover often obscures optical observations in tropical and temperate regions, and where information about forest structure extends throughout the three-dimensional canopy volume rather than just the top surface visible to optical sensors.

2 The Importance of Forests: Context for Radar Applications

Understanding why radar remote sensing matters for forest monitoring requires first appreciating the critical role forests play in Earth’s systems (Figure 2). Forests are crucial for regulating energy, water, and carbon balance, covering 31% of Earth’s land surface and 43% of the European Union’s area (Bonan 2008). They represent Europe’s most important renewable source of biomass and support the livelihoods of 1.6 billion people globally (FAO 2020). Forests provide habitat for the vast majority of terrestrial plant and animal species, making their monitoring essential for biodiversity conservation (FAO and UNEP 2020).

Figure 2: The global importance of forests. Forests regulate energy, water, and carbon cycles; cover substantial portions of Earth’s land surface; provide renewable biomass; support billions of people’s livelihoods; and harbor most terrestrial biodiversity. These multiple roles make forest monitoring through remote sensing a critical application. Source: Dietmar Rabich (License: CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/)

Traditional optical remote sensing faces significant challenges in forest environments. Dense tropical forests occur in regions with persistent cloud cover, limiting optical data acquisition to rare cloud-free windows. Even in temperate zones, seasonal cloud patterns can prevent optical observations for weeks or months (Woodhouse 2017). Moreover, optical sensors primarily detect the uppermost canopy layer, providing limited information about understory structure, stem volume, or subsurface moisture (Kasischke et al. 2019).

Radar overcomes these limitations by penetrating clouds and, depending on wavelength, penetrating into and through vegetation canopies. The degree of penetration depends on the radar wavelength: shorter wavelengths (X-band, C-band) primarily interact with leaves and small branches, while longer wavelengths (L-band, P-band) penetrate deeper into canopy structure and interact with larger branches and stems (Figure 8). This wavelength-dependent penetration enables estimation of forest structural parameters including canopy height, biomass, and vertical profile that remain inaccessible to optical methods (Le Toan et al. 2011).

3 Why Microwave Radiation?

The choice of microwave frequencies for radar remote sensing stems from fundamental physical principles governing electromagnetic wave propagation and interaction with Earth’s atmosphere and surface. Figure 3 illustrates the dramatic difference between optical and radar imaging: the left image shows an optical view dominated by clouds obscuring the surface, while the right image demonstrates radar’s ability to see through these clouds to reveal underlying landscape features.

Figure 3: Comparison between optical (left) and radar (right) views of the same cloudy scene. The optical image shows dense cloud cover blocking surface visibility, while the radar image penetrates clouds to reveal surface features including vegetation patterns and topography. This demonstrates radar’s all-weather imaging capability.

The physical basis for this capability lies in the relationship between electromagnetic radiation wavelength and atmospheric interaction. Figure 4 shows the spectral distribution of solar radiation (Sun’s energy at 6000 K) and Earth’s thermal emission (Earth’s energy at 300 K) across the electromagnetic spectrum. The critical observation is that between these natural emission peaks lies a region of minimal natural radiation—the microwave gap (Ulaby and Long 2013).

Figure 4: Energy sources and atmospheric transmission. Top panel shows blackbody radiation curves for the Sun (6000 K) and Earth (300 K), revealing a gap in natural radiation around the microwave region (highlighted in red box). Bottom panel shows atmospheric transmission, demonstrating high transmission in the microwave “radio window” where natural illumination is minimal. This combination makes active radar systems necessary and effective for microwave remote sensing. Data: Simulated blackbody radiation using Planck’s Law (CODATA 2018 constants) and simplified atmospheric transmission model based on US Standard 1976 atmosphere. Solar temperature: 5778 K (IAU 2015). Not measured data.

In this microwave region, there is insufficient natural radiation for passive sensing, necessitating active illumination. However, this same region features excellent atmospheric transmission—electromagnetic waves at microwave frequencies experience minimal absorption by atmospheric water vapor, oxygen, and clouds (Woodhouse 2017). This atmospheric window extends from approximately 1 cm to 1 m wavelength, encompassing the standard radar bands used for remote sensing.
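The “microwave gap” in natural radiation can be checked directly from Planck’s law. The sketch below (illustrative band and temperature choices; not the code used for Figure 4) compares blackbody spectral radiance at a C-band wavelength against each body’s spectral peak:

```python
import numpy as np

# Physical constants (CODATA 2018)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1 (Planck's law)."""
    a = 2.0 * h * c**2 / wavelength_m**5
    b = h * c / (wavelength_m * kB * T)
    return a / np.expm1(b)

# Compare natural radiation at C-band (5.6 cm) to each body's emission peak
lam_c = 0.056                  # C-band wavelength, m (illustrative)
peak_sun = 2.898e-3 / 5778     # Wien's law: peak wavelength of the Sun
peak_earth = 2.898e-3 / 300    # and of the Earth

for name, T, peak in [("Sun", 5778, peak_sun), ("Earth", 300, peak_earth)]:
    ratio = planck_radiance(lam_c, T) / planck_radiance(peak, T)
    print(f"{name}: C-band radiance is {ratio:.1e} of its spectral peak")
```

Both ratios come out many orders of magnitude below one, confirming that passive sensing at these wavelengths has almost no natural signal to work with.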

The microwave region offers additional advantages beyond cloud penetration:

  • Reduced atmospheric scattering: Unlike visible light, microwave radiation experiences minimal Rayleigh scattering from air molecules and aerosols (Henderson and Lewis 1998)
  • Minimal ionospheric effects: At frequencies above ~1 GHz (wavelengths shorter than ~30 cm), ionospheric refraction becomes negligible for space-based systems (Meyer, Nicoll, and Doulgeris 2019)
  • Weather independence: Radar imaging quality remains consistent regardless of solar illumination, precipitation (except very heavy rain at shorter wavelengths), fog, or smoke (Woodhouse 2017)

These properties make radar particularly valuable for operational applications requiring guaranteed data acquisition, such as disaster response, military reconnaissance, and continuous monitoring programs where temporal gaps are unacceptable.

4 Side-Looking Radar Geometry

The fundamental geometry of radar imaging systems differs profoundly from optical sensors. While optical sensors typically observe directly beneath the platform (nadir-looking), imaging radar systems employ a side-looking geometry that is essential to their operational principle (Figure 5). This geometric arrangement creates both unique capabilities and specific challenges that shape how radar data must be acquired and interpreted.

Figure 5: SAR geometry and synthetic aperture principle. Left: Side-looking radar imaging geometry illustrating the relationship between sensor height H, incidence angle θᵢ, beamwidth β, slant range R, and ground coverage (swath). The pixel dimensions are defined by pulse length (range resolution) and pulse footprint (along-track resolution). Right: Synthetic aperture concept: coherent integration of radar returns over multiple platform positions creates an effective aperture length L_SA = βR₀, where β is the real antenna beamwidth and R₀ is the range to target. This synthesis enables azimuth resolution much finer than achievable with the physical antenna alone. Source: NASA SAR Handbook (NASA, n.d.)

4.1 Geometric Parameters

The side-looking configuration is defined by several critical geometric parameters (Henderson and Lewis 1998):

  • Slant range (R): The direct distance from the antenna to a target on the ground
  • Ground range: The horizontal distance from the point directly beneath the antenna to the target
  • Look angle (θ): The angle between nadir (vertical) and the line from antenna to target
  • Incidence angle (θi): The angle between the incident radar beam and the perpendicular to the local surface (normal)
  • Depression angle: The angle between horizontal and the radar beam
  • Beamwidth (β): The angular extent of the transmitted radar pulse

The relationship between these parameters determines the spatial resolution and geometric distortions in radar images. The ground range resolution—the ability to distinguish separate targets in the across-track direction—is determined by the radar pulse length:

\[R_{res} = \frac{c \tau}{2 \sin \theta}\]

where c is the speed of light, τ is the pulse duration, and θ is the look angle (Woodhouse 2017). The factor of 2 appears because the radar signal must travel to the target and back; the sin θ term projects the slant-range resolution cτ/2 onto the ground.
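As a quick numeric check (the pulse length and look angle are illustrative, not from any specific sensor), the formula can be evaluated for a short uncompressed pulse:

```python
import math

def ground_range_resolution(tau_s, look_angle_deg, c=299_792_458.0):
    """c*tau / (2 sin(theta)): minimum ground separation resolvable by pulse length."""
    theta = math.radians(look_angle_deg)
    return c * tau_s / (2.0 * math.sin(theta))

# Hypothetical 50 ns pulse at a 35 degree look angle
print(f"{ground_range_resolution(50e-9, 35):.1f} m")  # roughly 13 m
```

Note that operational SARs transmit long frequency-modulated (chirped) pulses and compress them on receive, so the effective τ is set by the inverse of the chirp bandwidth rather than the physical pulse duration.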

The azimuth resolution—the ability to distinguish targets along the direction of platform motion—depends on the antenna aperture length (L) and slant range (R):

\[A_{res} = \frac{\lambda R}{L}\]

For real aperture radar systems, achieving fine azimuth resolution requires either very long antennas or short ranges. Synthetic Aperture Radar (SAR) overcomes this limitation by synthesizing a long antenna aperture through platform motion, achieving azimuth resolution given by (Cumming and Wong 2005):

\[A_{SAR} = \frac{L}{2}\]

remarkably independent of range and wavelength. This makes SAR the dominant technology for space-based radar remote sensing, enabling meter-scale resolution from satellite altitudes of hundreds of kilometers.
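The gain from aperture synthesis is easy to quantify. The sketch below contrasts the two azimuth-resolution formulas using approximate Sentinel-1 figures (12.3 m antenna, 5.55 cm wavelength, ~850 km slant range; illustrative values, not mission specifications):

```python
def real_aperture_azimuth_res(wavelength_m, slant_range_m, antenna_len_m):
    """lambda*R / L: azimuth resolution of a real-aperture radar."""
    return wavelength_m * slant_range_m / antenna_len_m

def sar_azimuth_res(antenna_len_m):
    """L / 2: azimuth resolution after synthetic-aperture processing."""
    return antenna_len_m / 2.0

print(real_aperture_azimuth_res(0.0555, 850e3, 12.3))  # several kilometres
print(sar_azimuth_res(12.3))                           # a few metres
```

From orbit, the real aperture would resolve only kilometre-scale features, while the synthetic aperture brings this down to metres, independent of range.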

4.2 Why Side-Looking?

The side-looking geometry is not merely a design choice but a fundamental requirement for imaging radar systems (Henderson and Lewis 1998). If a radar looked straight down (nadir), all targets at equal range would return echoes simultaneously, making it impossible to distinguish their positions laterally. The side-looking geometry resolves this ambiguity by ensuring that targets at different ground positions have different slant ranges, enabling their spatial separation in the resulting image.

Additionally, the oblique illumination creates shadows and highlights topographic features, enhancing the interpretability of terrain and surface structure. The specific choice of look angle represents a trade-off: steeper angles (closer to nadir) reduce geometric distortions and shadow areas but decrease sensitivity to surface roughness, while shallower angles (more oblique) maximize roughness sensitivity but increase geometric distortions and shadowing (Woodhouse 2017).

For vegetation and forestry applications, moderate incidence angles (typically 20-40°) provide an optimal balance: sufficient penetration into canopy structure, sensitivity to volume scattering from branches and leaves, and manageable geometric distortions (Kasischke et al. 2019).

5 Surface Geometry and Scattering Mechanisms

The interaction between radar waves and Earth’s surface is fundamentally governed by the surface’s geometric and dielectric properties. Unlike optical remote sensing, where surface reflectance is largely determined by chemical composition and pigments, radar backscatter is dominated by structural geometry—the size, shape, orientation, and arrangement of scattering elements—and by dielectric constant, which is primarily controlled by moisture content (Ulaby and Long 2013).

Figure 6 illustrates a fundamental principle: surface geometry matters profoundly for radar backscatter. The same objects (buildings, water body, terrain) illuminated from different directions produce dramatically different backscatter patterns. This angular dependence stems from the coherent nature of radar radiation and the resulting interference patterns created by scattering from multiple surface elements (Woodhouse 2017).

Figure 6: Influence of surface geometry on radar backscatter, shown for three different satellite viewing geometries. The scene contains buildings, a water body, and varied terrain. Notice how the same features appear differently depending on illumination direction: buildings may show strong corner reflections or shadows, water appears dark or bright depending on surface roughness and viewing angle, and terrain slope orientation relative to the radar beam significantly affects backscatter intensity. This geometric sensitivity is fundamental to radar remote sensing.

5.1 The Backscattering Coefficient

Quantitatively, radar systems measure the backscattering coefficient (σ°), defined as the radar cross section per unit area (Ulaby and Long 2013):

\[\sigma° = \frac{P_r}{P_t} \frac{(4\pi)^3 R^4}{G^2 \lambda^2 A}\]

where Pr is received power, Pt is transmitted power, R is range, G is antenna gain, λ is wavelength, and A is illuminated area. The backscattering coefficient is typically expressed in decibels:

\[\sigma°_{dB} = 10 \log_{10}(\sigma°)\]

This coefficient depends on surface properties (roughness, dielectric constant, structure), radar parameters (wavelength, polarization, incidence angle), and for vegetated surfaces, on moisture content throughout the scattering volume (Henderson and Lewis 1998).
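The decibel conversion above is used constantly when working with radar data; a minimal sketch (example values are illustrative orders of magnitude, not measurements):

```python
import math

def to_db(sigma0_linear):
    """Convert a linear backscattering coefficient to decibels."""
    return 10.0 * math.log10(sigma0_linear)

def to_linear(sigma0_db):
    """Convert decibels back to the linear backscattering coefficient."""
    return 10.0 ** (sigma0_db / 10.0)

print(to_db(0.1))        # -10.0 dB
print(to_linear(-20.0))  # 0.01
```

One practical caveat: averaging backscatter (for example, during multilooking or computing regional statistics) should be done in the linear domain and converted to dB afterwards, not by averaging dB values directly.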

5.2 Scattering Mechanisms

The physical processes by which surfaces scatter radar energy fall into several categories, each producing characteristic signatures that enable interpretation of surface properties (Figure 7):

Figure 7: Major scattering mechanisms in radar remote sensing. Top row: (a) Specular scattering from smooth surfaces directs energy away from the sensor; (b) Rough surface scattering distributes energy in multiple directions. Middle row: (c) Corner reflector geometry (dihedral scattering) from perpendicular surfaces directs energy strongly back toward the sensor; (d) Volume scattering from vegetation creates multiple scattering paths. Bottom row: (e) Bragg scattering from periodic surface structures shows wavelength-dependent constructive interference when surface periodicity matches radar wavelength.

1. Specular (Mirror-like) Scattering

When the radar wavelength is much larger than surface roughness features, the surface appears smooth and acts like a mirror, reflecting energy away from the sensor (Woodhouse 2017). This produces very low backscatter. Calm water bodies, paved roads, and recently tilled agricultural fields often exhibit specular scattering, appearing dark in radar images.

A common smoothness threshold is the Rayleigh criterion, \(h < \lambda / (8 \cos \theta_i)\); the stricter Fraunhofer criterion (Ulaby and Long 2013) is:

\[h < \frac{\lambda}{32 \cos \theta_i}\]

where h is RMS surface height variation, λ is wavelength, and θi is incidence angle. Surfaces satisfying this criterion produce predominantly specular scattering.
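The smoothness threshold can be evaluated for different bands. The sketch below computes it at an illustrative 35° incidence angle; the divisor parameter covers both conventions found in the literature (8 for the Rayleigh criterion, 32 for the stricter Fraunhofer criterion):

```python
import math

def smooth_height_threshold(wavelength_m, inc_angle_deg, divisor=32):
    """RMS-height threshold below which a surface scatters specularly.

    divisor=8 gives the Rayleigh criterion, divisor=32 the stricter
    Fraunhofer criterion.
    """
    theta = math.radians(inc_angle_deg)
    return wavelength_m / (divisor * math.cos(theta))

# Illustrative band wavelengths at a 35 degree incidence angle
for band, lam in [("X", 0.03), ("C", 0.056), ("L", 0.24)]:
    h_cm = smooth_height_threshold(lam, 35) * 100
    print(f"{band}-band: surfaces smoother than {h_cm:.2f} cm RMS appear specular")
```

The same field can therefore be “smooth” (dark) at L-band yet “rough” (bright) at X-band, since the threshold scales with wavelength.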

2. Diffuse (Rough Surface) Scattering

When surface roughness features approach or exceed the radar wavelength, scattering becomes diffuse, distributing energy in multiple directions including back toward the sensor (Henderson and Lewis 1998). This creates moderate to high backscatter depending on roughness magnitude and orientation. Natural surfaces like soil, rock, and vegetation typically produce diffuse scattering.

For moderate roughness, the backscattering coefficient increases with surface roughness until saturation occurs when roughness greatly exceeds the wavelength (Fung 1994).

3. Double-Bounce (Dihedral) Scattering

Corner reflector geometry—created by perpendicular or near-perpendicular surfaces—produces extremely strong backscatter through double-bounce reflection (Figure 7 panel c). The radar pulse reflects from one surface onto another, then back toward the sensor, creating backscatter intensities that can be 10-20 dB higher than rough surface scattering (Henderson and Lewis 1998).

Double-bounce mechanisms are characteristic of:

  • Urban areas: Building walls and ground create vertical-horizontal dihedrals
  • Flooded vegetation: Water surface and vertical stems create strong double-bounce returns
  • Tree trunks and ground: In forests, trunk-ground interaction produces significant double-bounce contribution

The mathematical description of double-bounce scattering from dihedrals depends on polarization, but generally produces strong HH and VV returns with distinct phase relationships (Ulaby and Long 2013).

4. Volume Scattering

Complex three-dimensional structures—particularly vegetation canopies—produce volume scattering through multiple interactions within the scattering volume (Figure 7 panel d). Radar energy penetrates into the canopy, scatters from leaves, branches, and stems at multiple levels, with some energy reaching the ground and scattering back through the canopy (Kasischke et al. 2019).

Volume scattering is characterized by:

  • Depolarization: Multiple scattering randomizes polarization, producing strong cross-polarized returns (HV, VH)
  • Wavelength dependence: Longer wavelengths penetrate deeper, interacting with larger structural elements
  • Complex phase behavior: Multiple path lengths create phase diversity that reduces interferometric coherence

For forests, volume scattering dominates at HV polarization and increases with forest density and biomass (Le Toan et al. 2011). The depth of penetration—and thus which structural elements contribute most to backscatter—depends critically on wavelength, as discussed in the following section.

5. Bragg Scattering

When surfaces exhibit periodic structure (waves, sand ripples, furrows), and the periodicity matches the radar wavelength, constructive interference produces enhanced backscatter through Bragg scattering (Woodhouse 2017). The Bragg condition is:

\[\lambda = 2 d \sin \theta_i\]

where d is the surface periodicity spacing. This mechanism is particularly important for ocean wave imaging, where capillary waves and gravity waves satisfying the Bragg condition create strong returns that enable measurement of wave spectra and surface wind fields (Moreira et al. 2013).
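Solving the Bragg condition for d shows which surface periodicities a given radar is sensitive to; a short sketch with illustrative C-band parameters:

```python
import math

def bragg_wave_spacing(radar_wavelength_m, inc_angle_deg):
    """Surface periodicity d = lambda / (2 sin(theta_i)) that resonates with the radar."""
    return radar_wavelength_m / (2.0 * math.sin(math.radians(inc_angle_deg)))

# C-band (5.6 cm) at a 35 degree incidence angle (illustrative values)
d = bragg_wave_spacing(0.056, 35)
print(f"resonant surface-wave spacing: {d * 100:.1f} cm")  # about 5 cm
```

A C-band SAR over the ocean is thus resonant with centimetre-scale capillary and short gravity waves, which is why it responds so strongly to local wind conditions.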

5.3 Permittivity and Moisture Effects

Beyond geometry, the dielectric constant (or relative permittivity, εr) profoundly affects radar backscatter. The dielectric constant determines how much electromagnetic energy is reflected versus transmitted at boundaries (Ulaby and Long 2013):

\[\Gamma = \frac{\sqrt{\varepsilon_{r1}} - \sqrt{\varepsilon_{r2}}}{\sqrt{\varepsilon_{r1}} + \sqrt{\varepsilon_{r2}}}\]

where Γ is the reflection coefficient and εr1, εr2 are the dielectric constants of the two media.

Crucially, liquid water has a very high dielectric constant (εr ≈ 80 at microwave frequencies) compared to dry soil (εr ≈ 3-5), dry vegetation (εr ≈ 2-10), and ice (εr ≈ 3) (Ulaby and Long 2013). This means:

  • Wet surfaces reflect strongly (high Γ) → high backscatter
  • Dry surfaces reflect weakly (low Γ) → lower backscatter
  • Moisture content changes cause substantial backscatter variations

This sensitivity to moisture is both an opportunity—enabling soil moisture mapping and vegetation water content estimation—and a challenge, as moisture variations can mask structural changes of interest (Wigneron et al. 2017).
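The moisture effect can be made concrete by evaluating the reflection coefficient for air over surfaces of increasing wetness. The dielectric constants below are illustrative round numbers within the ranges quoted above, not measured values:

```python
import math

def reflection_coefficient(eps1, eps2):
    """Normal-incidence Fresnel reflection coefficient between two media."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

# Air (eps = 1) onto surfaces of increasing moisture content
for name, eps in [("dry soil", 4.0), ("wet soil", 20.0), ("water", 80.0)]:
    gamma = reflection_coefficient(1.0, eps)
    print(f"{name:8s}: |Gamma| = {abs(gamma):.2f}, "
          f"power reflectivity = {gamma ** 2:.2f}")
```

Moving from dry to saturated soil can thus change the reflected power severalfold, which is the physical basis of radar soil-moisture retrieval.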

Figure 6 demonstrates these combined effects: the water body’s backscatter changes dramatically with wind conditions (flat water is specular and dark; rough water produces diffuse scattering and appears brighter), buildings show corner reflections or shadows depending on orientation, and terrain slope affects the local incidence angle and thus scattering mechanism.

Understanding these scattering mechanisms is essential for interpreting radar signatures of forests and other land cover types, where multiple mechanisms occur simultaneously and their relative contributions change with sensor parameters and environmental conditions.

6 Wavelength-Dependent Interactions

One of the most fundamental principles governing radar remote sensing of vegetation is that scattering occurs primarily from objects whose size is comparable to or larger than the radar wavelength (Ulaby and Long 2013). This wavelength dependence creates dramatic differences in how various radar frequencies interact with forest canopies, determining which structural elements contribute most to the observed backscatter.

Figure 8 illustrates this principle by showing a single tree structure as “seen” by different radar wavelengths. The progression from left to right—X-band (λ = 3 cm) through L-band (λ = 27 cm) to P-band (λ = 70 cm) and VHF (λ > 3 m)—reveals how longer wavelengths progressively see through smaller structural elements to interact with larger components deeper in the canopy.

Figure 8: Wavelength-dependent penetration and scattering in forest canopies. The same tree structure appears progressively more transparent at longer wavelengths. X-band (3 cm) interacts primarily with leaves and small branches in the upper canopy. L-band (27 cm) penetrates through foliage to interact with larger branches and some trunk components. P-band (70 cm) penetrates to the ground, interacting primarily with large branches and trunks. VHF (>3 m) wavelengths pass through most canopy structure, interacting primarily with trunks and ground. Source: Thuy Le Toan, 6th ESA Advanced Course on Radar Polarimetry 2021.

6.1 Standard Radar Frequency Bands

Radar systems are conventionally designated by letter bands that originated in military secrecy during World War II but now serve as standard nomenclature (Woodhouse 2017). Figure 9 shows the most important bands for Earth observation:

Figure 9: Radar frequency bands and their applications. The microwave spectrum from X-band through P-band is shown with corresponding wavelengths (3-90 cm) and frequencies. Current operational satellites (KOMPSAT-5, PAZ, TerraSAR-X/TanDEM-X at X-band; Radarsat-2, Sentinel-1 at C-band; NISAR at L-band and S-band) and planned missions (BIOMASS at P-band) are indicated. The lower panel shows atmospheric opacity across the electromagnetic spectrum, highlighting the excellent atmospheric transmission in the microwave region that enables all-weather radar observation.

The key bands for vegetation remote sensing are (Kasischke et al. 2019):

  • X-band (~3 cm, ~10 GHz): Interacts with leaves, needles, and very small branches. Limited canopy penetration. Used for high-resolution urban and ice mapping. Systems include TerraSAR-X, TanDEM-X, COSMO-SkyMed.

  • C-band (~6 cm, ~5 GHz): Interacts with leaves, small branches, and crop structure. Moderate canopy penetration. Optimal for agricultural monitoring and widely used operationally. Systems include Sentinel-1 (operational), Radarsat-2, and the retired ERS and Envisat.

  • S-band (~10 cm, ~3 GHz): Intermediate penetration, sensitive to medium branch structure. Less commonly used, but carried on the NISAR mission.

  • L-band (~24 cm, ~1.5 GHz): Penetrates through foliage to interact with larger branches and trunks. Sensitive to forest structure and biomass. Systems include ALOS PALSAR (retired), ALOS-2, SAOCOM, and NISAR (launched 2025).

  • P-band (~70 cm, ~450 MHz): Deep penetration to ground level even in dense forests. Interacts primarily with large branches and trunks. Optimal for biomass estimation. ESA’s BIOMASS mission, launched in 2025, is the first spaceborne P-band SAR.
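Band letters relate to wavelength through λ = c/f. The sketch below converts nominal centre frequencies to wavelengths; the frequencies are approximate, since each band designation covers a range:

```python
# Nominal centre frequencies (GHz) for common SAR bands (approximate)
bands = {"X": 9.6, "C": 5.4, "S": 3.2, "L": 1.27, "P": 0.435}

c = 299_792_458.0  # speed of light, m/s
for band, f_ghz in bands.items():
    wavelength_cm = c / (f_ghz * 1e9) * 100
    print(f"{band}-band: {f_ghz:5.2f} GHz  ->  {wavelength_cm:5.1f} cm")
```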

6.2 Penetration Depth and Biomass Sensitivity

The penetration depth—the distance that radar energy travels into vegetation before being largely scattered or absorbed—is inversely related to the extinction of the canopy (Le Toan et al. 2011):

\[\delta \approx \frac{1}{k_e}\]

where ke is the extinction coefficient of the vegetation, which depends on biomass density, moisture content, and structural complexity. Because ke decreases with wavelength, penetration depth increases with wavelength, scaling roughly linearly for forest canopies.

This wavelength-dependent penetration creates characteristic sensitivities to forest biomass:

  • Short wavelengths (X, C-band) saturate at low biomass (~20-60 Mg/ha) because scattering occurs entirely in the upper canopy; once this layer is dense, additional biomass below contributes minimally to backscatter (Kasischke et al. 2019).

  • Medium wavelengths (L-band) penetrate deeper and remain sensitive to higher biomass (~100-150 Mg/ha) by interacting with larger structural elements throughout the canopy (Mitchard et al. 2009).

  • Long wavelengths (P-band) penetrate to the ground even in dense tropical forests and maintain sensitivity to biomass approaching 500 Mg/ha, making them essential for global carbon monitoring in high-biomass ecosystems (Le Toan et al. 2011).

Figure 10 shows this principle with actual satellite data: RADARSAT (C-band) and ALOS PALSAR (L-band) images of the same area in Greenland reveal strikingly different surface expressions due to their different interaction depths (Joughin et al. 2016).

Figure 10: Wavelength comparison: RADARSAT C-band (left) and ALOS PALSAR L-band (right) images of Greenland. The C-band image shows primarily surface scattering from ice and snow, with strong sensitivity to surface roughness and moisture. The L-band image penetrates deeper into ice and snow layers, revealing subsurface structures and features invisible to C-band. The difference demonstrates how wavelength choice fundamentally determines what information can be extracted from radar data. Source: Joughin et al. 2016

6.3 Implications for Forest Monitoring

The wavelength-dependent nature of radar-vegetation interaction has profound implications for application selection:

For biomass estimation: L-band or P-band are essential in moderate to high biomass forests, as shorter wavelengths saturate too quickly. Global biomass mapping requires spaceborne L-band or P-band systems (Saatchi et al. 2011).

For deforestation detection: C-band systems like Sentinel-1 are effective because clearcutting removes the scattering canopy, creating large backscatter changes even at wavelengths that see only upper canopy in intact forest (Bouvet et al. 2018).

For crop monitoring: C-band provides optimal sensitivity to crop structure and soil moisture throughout the growing season without excessive penetration (McNairn and Shang 2016).

For forest structure: L-band provides good sensitivity to canopy structure parameters including height, gap fraction, and vertical complexity (Treuhaft et al. 2010).

For subsurface imaging: P-band and VHF enable detection of subsurface features, including archaeological remains beneath forest canopy and ice structure beneath dry snow (Moreira et al. 2013).

7 Polarization: A Multi-Dimensional View

Beyond wavelength, polarization—the orientation of the electromagnetic field—provides an additional dimension for characterizing surface and volume scattering properties. Polarimetric radar systems transmit and receive both horizontally (H) and vertically (V) polarized radiation, enabling measurement of how surfaces and volumes transform polarization through scattering (S. Cloude 2009).

7.1 Polarization Fundamentals

An electromagnetic wave’s polarization describes the orientation of its electric field vector. For radar remote sensing, we conventionally define polarization relative to the plane containing the radar line of sight and the local vertical (Woodhouse 2017):

  • Horizontal (H): Electric field perpendicular to this plane
  • Vertical (V): Electric field parallel to this plane

A radar system can transmit at one polarization and receive at the same or different polarization, creating four possible combinations called polarization channels (Lee and Pottier 2009):

  1. HH: Transmit horizontal, receive horizontal (co-polarized)
  2. VV: Transmit vertical, receive vertical (co-polarized)
  3. HV: Transmit horizontal, receive vertical (cross-polarized)
  4. VH: Transmit vertical, receive horizontal (cross-polarized)

Due to reciprocity (for most natural surfaces), HV = VH, so only three independent measurements are available (S. Cloude 2009).
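These channels are conveniently organized in a 2×2 scattering matrix, with reciprocity making the matrix symmetric. The sketch below builds one for a single pixel using made-up complex amplitudes (hypothetical values, purely illustrative) and computes the cross-polarization ratio often used as a volume-scattering indicator:

```python
import numpy as np

# Illustrative single-pixel scattering matrix (complex amplitudes, arbitrary
# units) with reciprocity S_HV = S_VH enforced, as for a natural target
s_hh, s_vv = 0.8 + 0.1j, 0.6 - 0.2j
s_hv = 0.15 + 0.05j           # equals s_vh by reciprocity
S = np.array([[s_hh, s_hv],
              [s_hv, s_vv]])

assert np.allclose(S, S.T)    # reciprocal target: symmetric matrix

# Channel powers |S_pq|^2 and the HV/HH cross-polarization ratio
powers = {pq: abs(v) ** 2 for pq, v in
          [("HH", s_hh), ("VV", s_vv), ("HV", s_hv)]}
cross_pol_ratio_db = 10 * np.log10(powers["HV"] / powers["HH"])
print(powers, f"{cross_pol_ratio_db:.1f} dB")
```

A low (strongly negative) HV/HH ratio, as here, is consistent with surface-dominated scattering; dense vegetation pushes the ratio upward through depolarization.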

7.2 Polarimetric Modes

Different radar systems offer various polarimetric capabilities:

  • Single polarization: Transmits and receives only one polarization (HH or VV). Simplest mode, used for routine monitoring. Example: Sentinel-1 in Extra Wide Swath mode (Torres et al. 2012).

  • Dual polarization: Transmits one polarization and receives both (VV+VH or HH+HV). Provides sensitivity to depolarization from volume scattering. Example: Sentinel-1 in Interferometric Wide Swath mode (Torres et al. 2012), Envisat.

  • Alternating polarization: Switches between transmitting H and V, receiving the same polarization (or often both). Example: Envisat.

  • Compact polarimetry: Transmits circular polarization and receives H and V, providing partial polarimetric information with reduced data volume (Raney 2007). Example: RADARSAT Constellation Mission.

  • Quad polarization (full polarimetry): Transmits both H and V, receives all four combinations, capturing the complete scattering matrix. Provides maximum information but at reduced spatial coverage. Example: RADARSAT-2, ALOS-2 in polarimetric mode (S. Cloude 2009).

7.3 Scattering Behavior by Polarization

Different scattering mechanisms produce characteristic polarimetric signatures (Figure 11):

Figure 11: Polarimetric response of a cylindrical scatterer (representing a tree branch or trunk) at different orientations. Vertical cylinder (left): strong VV return, weak HH return. This demonstrates how polarization enables detection of scatterer geometry and orientation. Oblique cylinder (middle): both HH and VV respond, with depolarization producing HV signal. Horizontal cylinder (right): strong HH return (red), weak VV return (blue), as H-polarized radiation aligns with the long axis. Adapted from: NASA ARSET, https://www.youtube.com/watch?v=cQI49sOSc2g

Surface scattering (smooth or moderately rough surfaces):

  • Strong co-polarized returns (HH, VV)
  • Weak cross-polarized returns (HV ≈ 0)
  • VV typically stronger than HH at moderate incidence angles (Ulaby and Long 2013)

Double-bounce scattering (corner reflectors, ground-trunk interaction):

  • Very strong HH and VV
  • Minimal cross-polarization
  • The relative strength of HH vs VV depends on surface properties (Henderson and Lewis 1998)

Volume scattering (vegetation, rough surfaces, snow):

  • Strong cross-polarized returns (HV, VH)
  • Depolarization caused by multiple scattering events randomizing polarization
  • HV is diagnostic of volume scattering (Kasischke et al. 2019)

Table 1 summarizes the relative scattering strength by mechanism and polarization:

Table 1: Relative scattering strength by polarization for different mechanisms (Lee and Pottier 2009).
Scattering Mechanism   HH/VV       HV/VH      Diagnostic Feature
Rough Surface          High        Low        |SVV| > |SHH| > |SHV|
Double Bounce          Very High   Very Low   |SHH| > |SVV| > |SHV|
Volume Scattering      Moderate    High       Main source of |SHV|

7.4 Forest Applications of Polarimetry

Polarimetric data enable several important applications in forest remote sensing:

1. Forest/Non-Forest Classification

The high HV backscatter from volume scattering in forests contrasts sharply with the low HV from bare soil or water, enabling robust forest mapping even when HH or VV responses are ambiguous (Santoro et al. 2011).

2. Forest Type Discrimination

Different forest types exhibit distinct polarimetric signatures: coniferous forests with vertical trunk structure show enhanced VV, deciduous forests with more horizontal branch orientation favor HH, and forest structure metrics like crown density affect depolarization strength (Lee and Pottier 2009).

3. Biomass Estimation Enhancement

Combining multiple polarizations improves biomass estimation by capturing both surface (HH, VV) and volume (HV) scattering components. Multi-polarization approaches extend the range of biomass sensitivity compared to single-polarization methods (Robinson et al. 2013).

4. Inundation Mapping in Forests

Flooded forests produce distinctive polarimetric signatures: strong HH/VV from double-bounce between water and trunks, combined with moderate HV from remaining canopy volume scattering. This combination allows detection of flooding beneath canopy cover (Henderson and Lewis 2008).

7.5 Polarimetric Decompositions

Advanced polarimetric analysis employs target decomposition methods that mathematically separate the measured backscatter into contributions from different scattering mechanisms (S. R. Cloude and Pottier 1997). Figure 12 shows various decomposition results applied to the same scene.

Figure 12: Polarimetric decomposition methods applied to RADARSAT-2 data. Each decomposition separates the total backscatter into components representing different scattering mechanisms: Pauli decomposition (surface in blue, dihedral in red, volume in green); Freeman-Durden decomposition (separating surface, double-bounce, and volume contributions); H/A/Alpha decomposition (entropy, anisotropy, alpha angle providing scattering mechanism classification). Different decompositions emphasize different aspects of the scattering process. Source: Liu et al. 2021.

Common decompositions include:

  • Pauli decomposition: Separates single-bounce, double-bounce, and volume scattering (Lee and Pottier 2009)
  • Freeman-Durden decomposition: Models scattering as combination of surface, double-bounce, and volume components (Freeman and Durden 1998)
  • Cloude-Pottier (H/A/α) decomposition: Uses entropy, anisotropy, and alpha angle to characterize scattering randomness and mechanism (S. R. Cloude and Pottier 1997)
  • Yamaguchi decomposition: Extends Freeman-Durden with helix scattering term for complex oriented volumes (Yamaguchi et al. 2011)

These decompositions transform polarimetric data into interpretable scattering parameters, enhancing classification accuracy and providing physical insight into surface and volume properties (Lee and Pottier 2009).

8 Geometric Distortions in Radar Imaging

The side-looking geometry of radar systems, combined with the range-based measurement principle, creates characteristic geometric distortions that profoundly affect radar image geometry and require careful correction for quantitative analysis (Henderson and Lewis 1998). Unlike optical images where each pixel represents a fixed ground area, radar pixel spacing in range depends on local surface topography and viewing geometry.

Three primary distortion types affect radar imagery: foreshortening, layover, and shadow. Understanding these effects is essential for radar image interpretation and for determining optimal imaging geometry for specific applications.

8.1 Foreshortening

Foreshortening occurs when terrain slopes toward the radar, causing the slant range distance between the slope base and crest to be compressed relative to the true ground distance (Figure 13 left panel). The effect is analogous to viewing a hillside from an oblique angle: features on the slope appear compressed in the viewing direction (Woodhouse 2017).

Figure 13: Geometric distortions in radar imagery. Left: Foreshortening—the sensor-facing slope AB appears compressed to A’B’ in the image because the slant range difference is smaller than the ground range difference. Center: Layover—when slope exceeds radar look angle, the mountain top C’ is imaged before the base B, causing geometric reversal. Right: Shadow—area behind the mountain is not illuminated by the radar beam, appearing as a signal void (dark area) in the image. Ground range is shown as the horizontal axis; image positioning follows slant range order.

Mathematically, the compression factor for a uniform slope is (Henderson and Lewis 1998):

\[CF = \frac{R_{slant}}{R_{ground}} = \frac{\sin(\theta_i - \alpha)}{\sin(\theta_i)}\]

where θi is the radar incidence angle and α is the terrain slope angle (positive when sloping toward the radar).
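As a quick numeric check, the compression factor can be evaluated for representative geometries (a minimal sketch; the function name and angle values are illustrative, not from any SAR toolkit):

```python
import math

def compression_factor(incidence_deg, slope_deg):
    """Foreshortening compression factor CF = sin(theta_i - alpha) / sin(theta_i).

    incidence_deg: radar incidence angle theta_i (degrees)
    slope_deg: terrain slope alpha (degrees, positive when sloping toward the radar)
    """
    theta = math.radians(incidence_deg)
    alpha = math.radians(slope_deg)
    return math.sin(theta - alpha) / math.sin(theta)

# Flat terrain is unchanged (CF = 1); a radar-facing slope is compressed (CF < 1).
print(compression_factor(35, 0))   # flat terrain -> 1.0
print(compression_factor(35, 20))  # 20 deg slope toward the radar: strong compression
```

A 20° slope viewed at 35° incidence is compressed to less than half its ground length, which is why such slopes appear both shortened and abnormally bright.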

Consequences of foreshortening:

  • Bright returns: Compressed terrain results in energy returned from a larger actual area concentrated into fewer image pixels, increasing pixel brightness
  • Distorted geometry: Distances and areas are incorrectly represented, affecting measurements
  • Reduced texture: Surface detail is compressed, potentially limiting interpretation

Foreshortening effects decrease with increasing look angle (more oblique views), as the angular difference between radar look angle and slope angle increases (Woodhouse 2017).

8.2 Layover

When terrain slope exceeds the radar incidence angle, layover occurs: the top of the slope is closer to the radar in slant range than the base, causing these features to be imaged in reversed order (Figure 13 center panel). This geometric inversion is unique to radar and has no optical equivalent (Henderson and Lewis 1998).

The layover condition is met when (Woodhouse 2017):

\[\alpha > \theta_i\]

where α is terrain slope and θi is incidence angle.

Characteristics of layover:

  • Complete geometric reversal: Features at the slope top appear in front of (closer in range than) features at the base
  • Superposition: Signals from multiple elevations overlap in a single image location, mixing returns from different surface elements
  • Very bright returns: Multiple scattering surfaces contribute to the same pixels
  • Interpretation difficulty: Layover regions are essentially uninterpretable; features cannot be reliably identified

Figure 14 shows a dramatic real-world example: mountains in Alaska imaged by SAR exhibit severe foreshortening on the radar-facing slopes (very bright, compressed features) and shadow on the far slopes (no signal, black areas).

Figure 14: Foreshortening and shadow in mountainous terrain. SAR image from Alaska showing mountains with pronounced geometric distortions. The near slope (facing the radar, with illumination arriving from the left) shows foreshortening: bright compressed returns where terrain slopes toward the sensor (indicated by yellow arrows). The far slope shows radar shadow: dark areas where terrain blocks the radar beam (indicated by white arrows). Layover is indicated by red arrows.

Layover increases with decreasing look angle (steeper views), as the probability that terrain slope exceeds radar incidence angle increases. Very steep look angles can cause layover even on moderate slopes.

8.3 Radar Shadow

Radar shadow occurs when terrain blocks the radar beam from illuminating areas behind topographic obstacles (Figure 13 right panel). Unlike optical shadows (caused by blocked sunlight), radar shadows result from blocked radar illumination from the specific side-looking viewing geometry (Henderson and Lewis 1998).

Shadow occurs when (Woodhouse 2017):

\[\alpha_{far} < -\theta_i\]

where αfar is the slope angle on the far side of a topographic feature (negative for slopes facing away from the radar).
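The foreshortening, layover, and shadow conditions can be combined into a small classifier (a sketch following the text's sign convention: slope positive toward the radar, negative facing away; the function name is illustrative):

```python
def distortion_regime(incidence_deg, slope_deg):
    """Classify the geometric regime of a slope seen by a side-looking radar.

    incidence_deg: radar incidence angle theta_i (degrees)
    slope_deg: terrain slope alpha (degrees); > 0 faces the radar, < 0 faces away
    """
    if slope_deg > incidence_deg:
        return "layover"          # alpha > theta_i: slope top imaged before base
    if slope_deg < -incidence_deg:
        return "shadow"           # alpha_far < -theta_i: terrain blocks the beam
    if slope_deg > 0:
        return "foreshortening"   # radar-facing slope compressed in slant range
    return "normal"

for alpha in (-50, -10, 0, 15, 45):
    print(alpha, distortion_regime(35, alpha))
```

For a 35° incidence angle, only slopes steeper than 35° (toward or away from the sensor) fall into the layover or shadow regimes, which is why moderate incidence angles are a common compromise.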

Characteristics of radar shadow:

  • No signal return: Shadowed areas receive no radar illumination, resulting in zero or noise-level backscatter (appear black)
  • Complete information loss: No surface information can be retrieved from shadowed pixels
  • Sharp boundaries: Shadow edges are geometrically well-defined by terrain and viewing geometry
  • Can be useful: Shadow provides strong topographic information and can enhance 3D perception of terrain relief

Shadow extent increases with decreasing look angle (steeper views) and increases with increasing terrain relief. Calculating shadow extent requires detailed topographic information (DEM) and precise orbit geometry (Small 2011).

8.4 Mitigating Geometric Distortions

Several strategies minimize or correct geometric distortions (Henderson and Lewis 1998; Small 2011):

1. Imaging Geometry Selection

  • Moderate incidence angles (30-40°) balance foreshortening (worse at steep angles), layover (worse at steep angles), and shadow (worse at shallow angles)
  • Multiple viewing geometries: Ascending and descending pass combinations image opposite slope faces, ensuring one geometry avoids layover/shadow for each feature
  • Right-side and left-side looking: Some satellites can point the radar beam to either side, providing geometric diversity

2. Terrain Correction

Radiometric terrain correction compensates for local incidence angle effects caused by topography, normalizing backscatter to a reference geometry (Small 2011). This requires:

  • High-quality Digital Elevation Model (DEM)
  • Precise orbit and sensor geometry information
  • Radiometric calibration

The correction adjusts pixel values based on local geometry:

\[\sigma°_{corrected} = \sigma°_{measured} \times \frac{\sin \theta_{local}}{\sin \theta_{ref}}\]

where θlocal is the local incidence angle accounting for topography, and θref is the reference incidence angle (Ulander 1996).
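A minimal sketch of this sin-ratio normalization, written so that foreshortened slopes (small local incidence angle, artificially bright) are scaled down and away-facing slopes are scaled up (function name and angle values are illustrative):

```python
import numpy as np

def terrain_correct(sigma0, local_inc_deg, ref_inc_deg=35.0):
    """Area-based radiometric terrain correction sketch:
    sigma0_corr = sigma0 * sin(theta_local) / sin(theta_ref).

    Foreshortened slopes concentrate a large ground area into few slant-range
    pixels and appear too bright; there the sin ratio is < 1, scaling them down.
    """
    return (sigma0 * np.sin(np.radians(local_inc_deg))
                   / np.sin(np.radians(ref_inc_deg)))

sigma0 = np.array([0.10, 0.10, 0.10])      # identical measured backscatter values
local_inc = np.array([20.0, 35.0, 50.0])   # per-pixel local incidence angles (deg)
print(terrain_correct(sigma0, local_inc))  # radar-facing pixel reduced, far slope raised
```

In practice the local incidence angle grid comes from a DEM and precise orbit geometry; the scalar reference angle here stands in for that full processing chain.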

3. Geocoding and Orthorectification

Geometric terrain correction (orthorectification) transforms radar images from slant range geometry to map coordinates, correcting foreshortening and compensating for terrain elevation (Small 2011). This process:

  • Requires a DEM to determine true ground positions
  • Corrects pixel spacing for terrain slope
  • Enables integration with GIS data and other sensors
  • Cannot recover information from layover or shadow, which remain as distortions

4. Masking Unreliable Areas

For quantitative analysis, pixels affected by severe layover or shadow are typically masked and excluded from analysis, as their values do not reliably represent surface properties (Santoro et al. 2011).

In forest applications, geometric distortions are less severe than in mountainous terrain, as forest canopy topography is gentler. However, even moderate slopes can create significant foreshortening effects that must be corrected for accurate biomass estimation or change detection (Cartus, Santoro, and Kellndorfer 2012).

9 Speckle: Coherent Noise in Radar Images

Unlike optical images where pixel values represent incoherent addition of reflected energy from multiple scatterers within the resolution cell, radar images are formed from coherent summation of returns from all scatterers, preserving phase relationships (Goodman 1976). This coherent imaging process creates speckle—a granular noise pattern that is not sensor noise but rather an intrinsic consequence of coherent detection from random scatterers within a resolution cell.

Figure 15 illustrates the physical origin of speckle. When multiple scatterers within a resolution cell reflect the radar signal, each reflection returns with a specific amplitude (determined by the scatterer’s radar cross section) and phase (determined by the distance traveled). Because radar preserves phase information, these individual returns coherently interfere—adding constructively where phases align and destructively where phases oppose (Lee and Pottier 2009).

Figure 15: Physical origin of radar speckle. (a) The radar signal transmitted to the surface. (b) The radar signal interacts with multiple scatterers (represented by different colored waves) within a single resolution cell. Each scatterer returns a signal with amplitude (length of arrow) and phase (direction of arrow) determined by its size and distance. Vector sum shows coherent addition of returns—phases matter. When phases align, constructive interference produces a large amplitude. When phases are random, vector summation produces smaller net amplitude. (c) Over many resolution cells, this random interference creates the characteristic speckle pattern—bright pixels where phases happened to align constructively, dark pixels where destructive interference occurred. This is not noise but deterministic interference from the specific arrangement of scatterers.

9.1 Speckle Statistics

For a resolution cell containing many independent scatterers with random phases, the resulting intensity follows specific probability distributions (Goodman 1976; Lee 1981):

Single-look intensity: The backscatter intensity I from one image (one “look”) follows a negative exponential distribution (a gamma distribution with shape parameter 1):

\[p(I) = \frac{1}{\langle I \rangle} \exp\left(-\frac{I}{\langle I \rangle}\right)\]

where ⟨I⟩ is the mean intensity. This distribution has standard deviation equal to the mean, creating a signal-to-noise ratio (SNR) of 1 (0 dB) (Lee and Pottier 2009).

Multi-look intensity: Averaging N independent looks reduces speckle. The multi-look intensity follows a gamma distribution with shape parameter N:

\[p(I_N) = \frac{N^N I_N^{N-1}}{\Gamma(N)\langle I \rangle^N} \exp\left(-\frac{N I_N}{\langle I \rangle}\right)\]

The resulting SNR improves to \(\sqrt{N}\) (Lee 1981), making multi-looking the fundamental speckle reduction technique.
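These statistics are easy to verify with a Monte Carlo sketch: summing many unit scatterers with random phases produces exponentially distributed intensity with SNR ≈ 1, and averaging N looks raises the SNR to ≈ √N (all counts and seeds below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Each resolution cell coherently sums many unit scatterers with random phases.
n_pixels, n_scat = 100_000, 50
phases = rng.uniform(0.0, 2.0 * np.pi, (n_pixels, n_scat))
field = np.exp(1j * phases).sum(axis=1)   # coherent (vector) sum per cell
intensity = np.abs(field) ** 2            # single-look intensity

# Fully developed speckle: std(I) ~ mean(I), so SNR = mean/std ~ 1.
snr_single = intensity.mean() / intensity.std()

# Multi-looking: averaging N independent looks raises the SNR to ~sqrt(N).
N = 4
multilook = intensity.reshape(-1, N).mean(axis=1)
snr_multi = multilook.mean() / multilook.std()

print(round(snr_single, 2), round(snr_multi, 2))  # ~1.0 and ~2.0
```

The simulation reproduces both distributions without assuming them: the exponential law emerges purely from the random-phase vector sum.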

9.2 Visual Impact

Figure 15 (c) shows the characteristic appearance of speckle: a scene that should appear uniform (same surface type, in this case forest) exhibits random pixel-to-pixel intensity variation. This “salt and pepper” texture is present throughout radar images, obscuring fine spatial detail and complicating image interpretation and classification (Lee and Pottier 2009).

Key characteristics of speckle (Goodman 1976):

  • Multiplicative: Speckle variance increases proportionally with signal level (unlike additive sensor noise)
  • Spatially correlated: Adjacent pixels show some correlation at scales comparable to the resolution
  • Fully developed: In single-look images, the speckle standard deviation equals the mean
  • Not random noise: Each speckle pattern is deterministic for a given scene and viewing geometry; repeated observations from identical geometry would produce identical speckle (Zebker and Villasenor 1992)

9.3 Speckle Reduction Strategies

Several approaches mitigate speckle’s impact on radar image analysis (Lee and Pottier 2009; Lee 1981):

1. Multi-looking (Spatial Averaging)

Dividing the synthetic aperture into sub-apertures creates multiple independent images (“looks”) of the same scene. Averaging these looks reduces speckle at the cost of degraded spatial resolution (Oliver and Quegan 1991):

\[\text{Speckle reduction (in standard deviation)} = \sqrt{N_{looks}}\] \[\text{Resolution degradation (in the averaged dimension)} = N_{looks}\]

Typical operational SAR products use 3-5 looks in range and azimuth, trading ~3-5 dB speckle reduction for similar resolution loss (Torres et al. 2012).

2. Spatial Filtering

Adaptive filters apply local averaging weighted by spatial statistics, attempting to smooth speckle while preserving edges and features (Lee 1981; Frost et al. 1982). Common filters include:

  • Lee filter: Adapts averaging based on local coefficient of variation (Lee 1981)
  • Frost filter: Uses exponential kernel weighted by local statistics (Frost et al. 1982)
  • Gamma MAP filter: Maximum a posteriori estimation assuming gamma distribution (Lopes et al. 1990)

These filters improve speckle suppression compared to simple averaging but still trade spatial resolution for speckle reduction.
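The adaptive principle behind the Lee filter can be sketched in a few lines: smooth fully where the local variation matches pure speckle, and leave pixels untouched where it exceeds it (a minimal sketch for intensity imagery; the weight formula follows Lee (1981), but the function name and window size are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, n_looks=1):
    """Minimal Lee filter sketch: output = mean + W * (img - mean),
    with W = 1 - Cu^2 / Ci^2 clipped to [0, 1].

    Cu = 1/sqrt(n_looks) is the speckle coefficient of variation; Ci is the
    local one. Homogeneous areas (Ci ~ Cu) are fully smoothed (W ~ 0);
    strong edges and point targets (Ci >> Cu) are preserved (W ~ 1).
    """
    mean = uniform_filter(img, size)
    var = np.maximum(uniform_filter(img ** 2, size) - mean ** 2, 0.0)
    ci2 = var / np.maximum(mean ** 2, 1e-12)          # local (CoV)^2
    w = np.clip(1.0 - (1.0 / n_looks) / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + w * (img - mean)

# On a homogeneous speckled scene the filter should cut variation sharply.
rng = np.random.default_rng(0)
speckled = rng.exponential(1.0, (64, 64))   # single-look intensity speckle
filtered = lee_filter(speckled)
print(speckled.std(), filtered.std())
```

Production despeckling adds refinements (edge-directed windows, refined Lee, proper boundary handling), but the weighting logic is the same.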

3. Multi-temporal Averaging

Averaging images from different dates reduces speckle if scene properties remain stable, as speckle patterns differ between acquisitions (Quegan and Yu 2001):

\[\text{Speckle reduction} = \sqrt{N_{dates}}\]

This approach is particularly effective for forest monitoring where temporal change is slow, though care must be taken not to average over genuine changes of interest (Santoro et al. 2011).

4. Transform-Domain Filtering

Sophisticated techniques decompose images into spatial or frequency domains, apply selective filtering, and reconstruct; examples include wavelet-domain shrinkage and non-local means approaches (Deledalle, Denis, and Tupin 2015).

These advanced methods achieve better feature preservation than traditional filters but at higher computational cost (Argenti et al. 2013).

9.4 Speckle Tracking and Offset Tracking

Interestingly, speckle that complicates radiometric analysis can be exploited for motion detection. Speckle tracking (or offset tracking) measures displacement by correlating speckle patterns between repeat-pass images (Strozzi et al. 2002). Because each speckle pattern is unique and deterministic, surface displacement shifts the entire pattern, enabling measurement of large surface motions such as glacier and ice-stream flow (Strozzi et al. 2002).

This technique works even when interferometric coherence is lost, though with lower precision (~1/10 pixel) than interferometry (~1/100 wavelength) (Strozzi et al. 2002).
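The core idea can be demonstrated with synthetic data: shifting a speckle pattern and recovering the shift from the cross-correlation peak (a sketch using an exact cyclic shift; real offset tracking works on patches with sub-pixel peak fitting):

```python
import numpy as np

rng = np.random.default_rng(1)

# A deterministic speckle pattern; surface motion shifts the whole pattern.
ref = rng.exponential(1.0, (128, 128))
true_shift = (5, -3)                              # displacement in pixels
moved = np.roll(ref, true_shift, axis=(0, 1))

# FFT-based cross-correlation; the peak location gives the offset.
xcorr = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(moved)).real
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)

# Unwrap cyclic peak coordinates into signed offsets.
est = [int(p) if p < s // 2 else int(p) - s for p, s in zip(peak, xcorr.shape)]
print(est)  # [5, -3]
```

Because the correlation peak of a pattern with its own shifted copy strictly dominates all other lags, the estimate is exact here; with real repeat-pass data, decorrelation broadens and lowers the peak, which is what limits precision to roughly a tenth of a pixel.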

9.5 Implications for Forest Monitoring

In forest remote sensing, speckle affects applications differently (Kasischke et al. 2019):

  • Biomass estimation: Speckle introduces estimation uncertainty; multi-looking and filtering are essential for reducing variance in biomass predictions (Santoro et al. 2021)
  • Change detection: Temporal filtering reduces false alarms from speckle-induced differences, but thresholds must account for speckle statistics (Reiche et al. 2015)
  • Classification: Texture measures derived from speckle patterns can improve forest type discrimination (Kumar et al. 2014), turning a challenge into a feature

Understanding speckle as coherent interference rather than random noise is fundamental to developing effective processing strategies and correctly interpreting radar statistics in forest and other land cover applications.

10 Phase Information: Interferometry and Beyond

Unlike optical and infrared sensors that detect only intensity (brightness), radar systems measure both the amplitude and phase of the returned signal (Bamler and Hartl 1998). While amplitude corresponds to backscatter intensity and relates to surface roughness and dielectric properties, phase carries information about the distance traveled by the radar signal. This phase information, preserved through coherent detection, enables a powerful set of techniques collectively known as radar interferometry that can measure surface topography, detect subtle surface displacements, and characterize vegetation structure with extraordinary precision (P. A. Rosen et al. 2000).

10.1 Phase Fundamentals

For a radar signal with wavelength λ, the two-way path to a target at range R creates a phase shift (Bamler and Hartl 1998):

\[\phi = \frac{4\pi R}{\lambda}\]

The factor of 4π (rather than 2π) appears because the signal travels to the target and back, covering distance 2R. Any change in range—whether from topography, surface displacement, or atmospheric effects—creates a corresponding phase change (Hanssen 2001).

Figure 16 illustrates this concept. Two observations of the same point P at slightly different times (t₁ and t₂) show a change in range (ΔR) due to surface deformation. This range change produces a measurable phase difference:

\[\Delta \phi = \frac{4\pi \Delta R}{\lambda}\]

Figure 16: Principle of radar interferometry. Two SAR acquisitions at times t₁ and t₂ image the same point P on the surface. If the surface has moved (deformation ΔR) between acquisitions, the change in range causes a phase difference Δφ between the two measurements. The phase difference is proportional to displacement: Δφ = (4π/λ)ΔR. For typical radar wavelengths (centimeters), phase measurements are sensitive to millimeter-scale displacements. The inset shows the relationship between time, phase, and deformation for two returning signals.
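The phase-to-displacement relationship above is a one-liner worth internalizing; a small sketch makes the fringe sensitivity concrete (the function name and the ~5.6 cm C-band wavelength are illustrative values, not from any mission specification):

```python
import math

def phase_to_displacement(dphi_rad, wavelength_m):
    """Delta R = lambda * Delta phi / (4 * pi): the 4*pi reflects the two-way path."""
    return wavelength_m * dphi_rad / (4.0 * math.pi)

# One full fringe (2*pi of phase) at C-band corresponds to lambda/2 of
# line-of-sight motion: 5.6 cm / 2 = 28 mm.
c_band = 0.056  # meters
print(phase_to_displacement(2.0 * math.pi, c_band) * 1000.0, "mm per fringe")
```

This is why a C-band interferogram fringe is commonly quoted as ~28 mm of line-of-sight displacement, and why counting fringes directly yields total deformation.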

10.2 Interferometric SAR (InSAR)

Interferometric SAR creates an interferogram by multiplying one complex SAR image with the conjugate of another and extracting the phase difference (Bamler and Hartl 1998):

\[\phi_{int} = \arg(S_1 \cdot S_2^*)\]

where S₁ and S₂ are complex SAR images from two acquisitions. The resulting interferometric phase contains contributions from (Hanssen 2001):

\[\phi_{int} = \phi_{flat} + \phi_{topo} + \phi_{defo} + \phi_{atm} + \phi_{noise}\]

where:

  • φflat: Phase from the flat-earth ellipsoid (depends on imaging geometry)
  • φtopo: Phase from surface topography
  • φdefo: Phase from surface displacement between acquisitions
  • φatm: Phase from atmospheric path delay differences
  • φnoise: Phase from decorrelation and noise
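A toy interferogram construction shows the key property of the conjugate product: the random speckle phase, common to both acquisitions, cancels, leaving only the (wrapped) path-difference phase (all sizes and the synthetic phase ramp are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (64, 64)

# Synthetic single-look complex (SLC) pair: same scene, so same speckle;
# the second acquisition carries an extra deformation phase ramp.
amp = rng.rayleigh(1.0, shape)
speckle = np.exp(1j * rng.uniform(-np.pi, np.pi, shape))
s1 = amp * speckle
defo = np.tile(np.linspace(0.0, 4.0 * np.pi, shape[1]), (shape[0], 1))  # two fringes
s2 = s1 * np.exp(1j * defo)

# Interferogram phase: phi_int = arg(S1 * conj(S2)). The shared speckle phase
# cancels in the product; only the deformation phase remains, wrapped to [-pi, pi].
phi_int = np.angle(s1 * np.conj(s2))

# Check: phi_int + defo is congruent to 0 (mod 2*pi) everywhere.
residual = np.angle(np.exp(1j * (phi_int + defo)))
print(np.abs(residual).max())
```

With this sign convention (S1 times the conjugate of S2), motion away from the sensor appears as negative interferometric phase; conventions vary between processors, so the sign should always be checked against a known target.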

Figure 17 shows a striking example: an interferogram of volcanic activity on Fogo Island, Cape Verde. The colored fringes, each full cycle from blue through red and back to blue representing one 2π cycle of phase change (λ/2 of line-of-sight displacement), map surface deformation with centimeter precision over the entire scene. The concentric fringe pattern centered on the volcano reveals inflation or deflation of the volcanic edifice.

Figure 17: Interferogram of Fogo volcano, Cape Verde. Each complete color cycle (fringe) represents λ/2 displacement in the line of sight direction. The concentric fringe pattern centered on the volcanic peak indicates inflation or deflation of the volcano between SAR acquisitions. The deformation field shows highest rates near the summit and gradual decrease radially outward. Note how a single SAR interferogram maps deformation across the entire scene with millimeter sensitivity—a capability unique to radar interferometry. Google Earth base map shown for geographic context.

10.3 Applications of InSAR

The sensitivity of phase to sub-wavelength changes enables diverse applications (P. A. Rosen et al. 2000; Hanssen 2001):

1. Topographic Mapping

Two SAR acquisitions from slightly different positions (creating a spatial baseline) produce interferometric phase that depends on topography. After removing flat-earth phase, the remaining phase directly relates to elevation (Zebker and Goldstein 1986):

\[h = \frac{\lambda R \sin \theta \Delta\phi_{topo}}{4\pi B_{\perp}}\]

where h is elevation, R is range, θ is look angle, and B⊥ is the perpendicular baseline between acquisition positions.

The TanDEM-X mission uses this principle with two satellites flying in close formation, achieving global topographic mapping with 12-meter horizontal resolution and 2-meter relative vertical accuracy (Krieger et al. 2013).
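The height formula is often summarized by the height of ambiguity: the elevation change that produces one full 2π fringe. A sketch with a hypothetical C-band repeat-pass geometry (all numeric values below are illustrative, not from any specific mission):

```python
import math

def phase_to_height(dphi, wavelength, slant_range, look_angle_deg, b_perp):
    """h = lambda * R * sin(theta) * dphi / (4 * pi * B_perp)  (repeat-pass form)."""
    return (wavelength * slant_range * math.sin(math.radians(look_angle_deg)) * dphi
            / (4.0 * math.pi * b_perp))

# Hypothetical geometry: lambda = 5.6 cm, R = 850 km, theta = 34 deg, B_perp = 150 m.
h_amb = phase_to_height(2.0 * math.pi, 0.056, 850e3, 34.0, 150.0)
print(round(h_amb, 1), "m of elevation per fringe")
```

Note the inverse dependence on B⊥: longer baselines give finer height sensitivity per fringe but decorrelate faster, so baseline selection is always a trade-off.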

2. Surface Displacement Measurement

Time-series of SAR acquisitions enable measurement of surface motion through differential interferometry (DInSAR), where topographic phase is subtracted using a DEM, leaving displacement phase (Massonnet et al. 1993):

\[\Delta R = \frac{\lambda \Delta\phi_{defo}}{4\pi}\]

Figure 18 shows a dramatic example: the 2003 Bam, Iran earthquake produced an interferogram with more than 50 fringes near the epicenter, indicating ~1.5 meters of surface displacement (Funning et al. 2005). The complexity of the fringe pattern reveals details of fault geometry and slip distribution.

Figure 18: InSAR measurement of the 2003 Bam earthquake, Iran. Each color cycle represents ~28 mm of surface displacement (C-band wavelength). The extremely dense fringe pattern near the center indicates up to 1.5 m of surface lowering from earthquake-induced subsidence. The circular pattern reveals a buried thrust fault. This interferogram enabled detailed analysis of fault geometry, slip distribution, and earthquake mechanism without ground-based measurements. Source: ESA, credit: Polimi/Poliba.

A key advantage of radar interferometry for earthquake monitoring is the ability to observe surface conditions both before and after seismic events. The interferogram reveals two distinct deformation centers connected by a linear feature—the surface expression of the fault rupture. When tectonic pressure causes the crust to fracture, one block of the Earth’s surface moves upward while the adjacent block subsides. The direction of vertical motion can be determined from the color sequence in the fringe pattern: areas where colors progress from red through yellow to green indicate uplift, while the reverse sequence (green through yellow to red) indicates subsidence. This bidirectional deformation pattern, combined with the spatial distribution of fringes, enables precise determination of the earthquake epicenter location and the geometry of the causative fault structure without requiring ground-based measurements.

Applications include:

  • Earthquake deformation: Co-seismic and post-seismic displacement fields (Massonnet et al. 1993)
  • Volcano monitoring: Magma inflation/deflation cycles (Massonnet, Briole, and Arnaud 1995)
  • Subsidence measurement: Groundwater extraction, mining, permafrost thaw (Galloway and Burbey 2011)
  • Glacier flow: Ice velocity through offset tracking and interferometry (Rignot et al. 1995)
  • Landslide monitoring: Slow-moving slope instabilities (Singleton et al. 2014)

3. Operational Ground Motion Services

The European Ground Motion Service provides systematic InSAR-derived displacement measurements across Europe, updating regularly and freely available (Dehls et al. 2019).

4. Forest Structure and Biomass

In vegetated areas, interferometric phase becomes more complex. The phase center—the elevation from which the effective scattering originates—depends on wavelength-dependent penetration depth (Treuhaft and Siqueira 1996):

  • Short wavelengths (X, C-band): Phase center near canopy top
  • Long wavelengths (L, P-band): Phase center within canopy, sensitive to vertical structure

Polarimetric interferometry (PolInSAR) combines polarimetry and interferometry to separate ground and canopy contributions, enabling extraction of forest height, vertical profile, and biomass from the interferometric phase structure (S. R. Cloude and Papathanassiou 2003). Figure 19 shows forest height derived from this technique for the Sundarbans mangrove forest.

Figure 19: PolInSAR-derived forest height in Sundarbans, India/Bangladesh. The color scale shows canopy height (0-15 m) derived from L-band polarimetric interferometry. The technique exploits how different polarizations penetrate to different canopy depths, creating polarization-dependent phase centers. Modeling these relationships allows inversion for forest height and vertical structure. Rivers appear as gaps (zero height). This demonstrates radar’s unique capability to measure forest structure through the three-dimensional canopy volume. Source: ESA.

5. Soil Moisture Estimation

Changes in soil moisture alter the dielectric constant, changing penetration depth and thus phase. Figure 20 shows an interferogram sensitive to sub-surface moisture changes, where phase variations correlate with irrigation patterns and moisture gradients (De Zan et al. 2014).

Figure 20: InSAR sensitivity to soil moisture. This TanDEM-X interferogram shows phase variations correlated with soil moisture differences across agricultural fields. Wetter soils (higher dielectric constant) create phase shifts through altered penetration depth. The clear field boundaries and systematic phase patterns demonstrate radar’s capability to map moisture variations, though distinguishing moisture from roughness changes requires careful analysis. Source: De Zan et al. 2014

10.4 Coherence and Decorrelation

Interferometry requires that the two SAR images maintain coherence—the speckle patterns must remain sufficiently similar that phase differences are meaningful (Zebker and Villasenor 1992). The interferometric coherence is defined as:

\[\gamma = \frac{|\langle S_1 S_2^* \rangle|}{\sqrt{\langle |S_1|^2 \rangle \langle |S_2|^2 \rangle}}\]

ranging from 0 (completely decorrelated, phase meaningless) to 1 (perfectly coherent, phase precise) (Bamler and Hartl 1998).
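The coherence magnitude is estimated in practice by replacing the expectations with local spatial averages; a sketch with synthetic data (window size, array shapes, and the noise level are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, size=5):
    """Sample coherence |<S1 S2*>| / sqrt(<|S1|^2> <|S2|^2>), with <.>
    estimated as a local size x size spatial average."""
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, size) + 1j * uniform_filter(cross.imag, size)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, size) *
                  uniform_filter(np.abs(s2) ** 2, size))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(3)
shape = (128, 128)
s1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)       # SLC scene
noise = rng.normal(size=shape) + 1j * rng.normal(size=shape)    # independent clutter

print(coherence(s1, s1).mean())              # identical acquisitions: ~1
print(coherence(s1, s1 + 2 * noise).mean())  # added decorrelation lowers gamma
```

Note that small estimation windows bias the coherence estimate upward, which is why operational processors report the window size alongside coherence maps.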

Decorrelation sources include:

  • Temporal decorrelation: Surface changes (vegetation growth, snow, moisture) between acquisitions (Zebker and Villasenor 1992)
  • Geometric decorrelation: Different viewing angles create different scattering paths (Bamler and Hartl 1998)
  • Volume decorrelation: In vegetation, scatterers distributed through depth create phase diversity (Treuhaft and Siqueira 1996)
  • Processing errors: Co-registration errors, phase unwrapping mistakes (Hanssen 2001)

For vegetation, temporal coherence decays rapidly (days to weeks at C-band) due to wind-induced motion, growth, and moisture changes (Wegmüller and Werner 1995). This limits repeat-pass interferometry over forests, though short temporal baselines (6-day for the Sentinel-1A/B constellation over Europe) or longer wavelengths (L-band maintains coherence longer) mitigate this issue (Santoro et al. 2021).

10.5 Phase Unwrapping

Measured interferometric phase is wrapped into the range [-π, +π], creating 2π ambiguities (Ghiglia and Pritt 1998). Phase unwrapping resolves these ambiguities to recover the continuous phase field representing physical displacement or topography (Goldstein, Zebker, and Werner 1988). This challenging inverse problem requires:

  • High coherence (phase quality)
  • Sufficient spatial sampling (fringe density not exceeding Nyquist)
  • Reliable algorithms to identify and integrate phase gradients correctly (C. W. Chen and Zebker 2001)

Unwrapping failures create errors that propagate through the phase field, potentially corrupting large image regions (Bamler and Hartl 1998). Quality-guided algorithms and statistical cost-flow approaches improve robustness (Goldstein, Zebker, and Werner 1988; C. W. Chen and Zebker 2001).
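In one dimension, the gradient-integration idea behind unwrapping is simple enough to sketch directly (the 2-D problem is far harder because gradients must be integrated consistently around residues; the ramp below is illustrative):

```python
import numpy as np

# A smooth phase ramp spanning three fringes, wrapped into (-pi, pi].
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))

# 1-D unwrapping: integrate the phase gradients, adding a 2*pi correction
# wherever a sample-to-sample jump exceeds pi.
unwrapped = np.unwrap(wrapped)
print(np.allclose(unwrapped, true_phase))  # True: continuous field recovered
```

This succeeds only because the true gradient never exceeds π between samples; where fringes are denser than the sampling (or coherence is low), the jump detection fails and errors propagate, which is exactly the failure mode described above.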

11 Future Radar Missions and Emerging Capabilities

The coming years will see a dramatic expansion in spaceborne radar capabilities, driven by missions specifically designed for vegetation monitoring, systematic change detection, and advanced interferometric techniques (Moreira et al. 2013). These next-generation systems will provide unprecedented temporal resolution, wavelength diversity, and measurement sophistication for forest and ecosystem applications.

11.1 Near-Term Operational Missions

NISAR (NASA-ISRO SAR) (P. Rosen and Kumar 2017): Launched in 2025, this joint NASA-ISRO mission carries a dual-frequency (L-band and S-band) SAR with a 12-day repeat cycle and systematic global coverage. Key capabilities:

  • L-band (24 cm): Full polarimetry, 6-12 day repeat for biomass and forest structure
  • S-band (9 cm): Dual polarimetry for cropland and soil moisture monitoring
  • 240 km swath: Near-global coverage every 12 days enables change detection and time series analysis

Figure 21 shows the NISAR spacecraft with its large deployable antenna. NISAR’s systematic coverage and open data policy will revolutionize forest monitoring by providing consistent, high-quality L-band data globally (P. Rosen and Kumar 2017).

Figure 21: NISAR spacecraft rendering showing the large deployable L-band antenna (12-meter diameter) and S-band antenna. The mission will provide systematic global SAR coverage every 12 days with full polarimetric capability at L-band. NISAR specifically targets ecosystem structure, biomass, and change detection applications, with open data access enabling operational forest monitoring. Source: NASA/JPL-Caltech

BIOMASS (ESA) (Le Toan et al. 2011): ESA’s first P-band spaceborne SAR, launched in April 2025, specifically designed for forest biomass estimation. Key innovations:

  • P-band (70 cm): Unprecedented penetration to the ground even in dense tropical forests
  • Polarimetric and interferometric modes: Enable structure and height estimation
  • Tomographic capability: 3D imaging of forest vertical structure through repeated passes
  • Global forest mapping: Primary mission goal is global biomass estimation for carbon accounting

Figure 22 shows the BIOMASS spacecraft with its distinctive large reflector antenna required for P-band operation. This mission will address the critical need for accurate tropical forest biomass estimates (Le Toan et al. 2011).

Figure 22: BIOMASS mission spacecraft showing the large deployable P-band reflector antenna. The 70-cm wavelength requires a substantial antenna (12-meter diameter) but provides penetration through dense forest canopies to the ground, enabling biomass estimation in high-biomass tropical forests where shorter wavelengths saturate. BIOMASS will provide the first spaceborne P-band SAR data for forest monitoring. Source: ESA

11.2 Advanced SAR Techniques for Forests

SAR Tomography (TomoSAR): By combining multiple SAR acquisitions from slightly different viewing angles (perpendicular baseline diversity), tomographic SAR reconstructs the three-dimensional distribution of scatterers (Reigber and Moreira 2000). Figure 23 illustrates the principle: multiple flight tracks create a synthetic aperture in the vertical dimension, enabling resolution of scatterer height.

Figure 23: SAR tomography principle and results. Left: Multiple SAR acquisitions from different flight tracks (or satellite orbits with varying baselines) create a synthetic aperture in the elevation dimension. Right: Resulting vertical reflectivity profiles show distinct scattering layers—ground, mid-canopy, and upper canopy—enabling 3D forest structure reconstruction. Horizontal and vertical slices through the tomographic volume reveal spatial patterns of vertical structure. Vertical point profiles show reflectivity as a function of height above ground. Source: EO-College SAR Tomography Tutorial

For forests, TomoSAR provides:

  • Vertical profile: Scattering intensity as a function of height, related to biomass distribution (Tebaldini 2012)
  • Ground-canopy separation: Unambiguous identification of ground and vegetation returns (Tebaldini and Rocca 2010)
  • Understory detection: Sensitivity to understory vegetation beneath the dominant canopy (Banda et al. 2016)
  • 3D structure metrics: Canopy complexity, layering, gaps (Minh et al. 2016)

Operational implementation requires:

  • Multiple baselines: At least 5-10 acquisitions with baseline diversity (Reigber and Moreira 2000)
  • High temporal coherence: Vegetation must remain stable across all acquisitions
  • Sophisticated processing: 3D focusing algorithms and parameter estimation (Fornaro, Serafino, and Soldovieri 2005)
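The tomographic focusing step can be sketched numerically. The geometry below (airborne L-band, ten passes, two scattering layers at assumed heights) is entirely illustrative, and simple beamforming stands in for the super-resolution estimators used in practice; it shows how baseline diversity turns multibaseline phase differences into a vertical reflectivity profile.

```python
import numpy as np

# Illustrative airborne L-band geometry (all values are assumptions).
wavelength = 0.23                        # metres
slant_range = 5000.0                     # metres
theta = np.deg2rad(35.0)                 # incidence angle
baselines = np.linspace(0.0, 100.0, 10)  # perpendicular baselines, 10 passes

# Vertical wavenumber per pass: k_z = 4π b_perp / (λ R sin θ)
kz = 4 * np.pi * baselines / (wavelength * slant_range * np.sin(theta))

# Two scattering layers: ground (z = 0 m) and a weaker canopy (z = 20 m).
heights = np.array([0.0, 20.0])
amplitudes = np.array([1.0, 0.7])

# Multibaseline data vector: coherent sum of both layers in every pass.
y = (amplitudes * np.exp(1j * np.outer(kz, heights))).sum(axis=1)

# Beamforming: steer back to each candidate height and sum coherently.
z_axis = np.linspace(-5.0, 25.0, 301)
profile = np.abs(np.exp(-1j * np.outer(z_axis, kz)) @ y) / len(kz)

# The profile peaks at the ground (z ≈ 0 m), with a secondary canopy
# peak near z ≈ 20 m: the two scattering layers are resolved in height.
print(round(z_axis[np.argmax(profile)], 1))
```

The vertical resolution and the height of ambiguity follow from the span and spacing of the baselines, which is why the acquisition requirements listed above (number of baselines, baseline diversity) matter.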

Differential Tomography: Combining TomoSAR with temporal series enables measurement of 3D structure changes, potentially detecting selective logging or degradation that leaves the upper canopy intact but alters vertical profile (Tebaldini et al. 2020).

Polarimetric Tomography: Integrating polarimetry with tomography provides polarization-dependent vertical profiles, enabling discrimination of scattering mechanisms (ground, trunk, branch, leaf) at different heights (S. R. Cloude 2006).

A dedicated chapter of these e-learning materials covers TomoSAR in greater depth.

11.3 Emerging Commercial Constellations

Beyond government missions, several commercial SAR constellations are emerging (Moreira et al. 2013):

Capella Space: The first U.S. commercial SAR constellation, providing sub-meter-resolution X-band imagery on demand.

ICEYE: Constellation of small X-band SAR satellites enabling rapid revisit and video-like monitoring.

While these commercial systems primarily target defense, maritime, and infrastructure monitoring, their high temporal resolution (potentially daily or better) could enable new forest monitoring applications, particularly for rapid deforestation detection or disaster response.

12 Physical and Semi-Empirical Models

Quantitative interpretation of radar signatures for forest parameters requires models linking backscatter to biophysical properties. These range from purely empirical statistical relationships to physical models solving Maxwell’s equations for electromagnetic scattering from complex forest structures (Ulaby and Long 2013).

12.1 The Water Cloud Model (WCM)

The simplest physically motivated model represents vegetation as a uniform cloud of water droplets above a soil surface (Attema and Ulaby 1978). The total backscatter is:

\[\sigma^0 = \sigma^0_{veg} + \tau^2 \sigma^0_{soil}\]

where σ°_veg is vegetation volume scattering, σ°_soil is soil surface scattering, and τ² is the two-way transmissivity through the vegetation layer (Attema and Ulaby 1978).

The vegetation component is:

\[\sigma^0_{veg} = A V_1 (1 - \tau^2) \cos \theta\]

and transmissivity is:

\[\tau^2 = \exp(-2 B V_2 \sec \theta)\]

where V₁ and V₂ are vegetation descriptors (often the same, e.g., LAI or biomass), A and B are fitting parameters, and θ is incidence angle (Bindlish and Barros 2000).

Despite its simplicity, WCM provides reasonable backscatter predictions for crops and grasslands and has been extended for forest applications by including stem and ground interaction terms (Kumar et al. 2016).
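The two equations above combine into a small forward model. In the sketch below V plays the role of both V₁ and V₂, and the parameter values (A, B, the soil backscatter level, the incidence angle) are illustrative assumptions, not calibrated values; in practice A and B are fitted per vegetation type.

```python
import numpy as np

def wcm_backscatter_db(V, sigma0_soil_db=-12.0, A=0.05, B=0.3, theta_deg=35.0):
    """Water Cloud Model: total backscatter (dB) for vegetation descriptor V.

    V stands in for both V1 and V2 (e.g. LAI); A and B are the empirical
    fitting parameters of Attema and Ulaby (1978). All defaults are
    illustrative assumptions.
    """
    theta = np.deg2rad(theta_deg)
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))       # two-way transmissivity
    sigma_veg = A * V * (1.0 - tau2) * np.cos(theta)  # canopy volume term
    sigma_soil = 10.0 ** (sigma0_soil_db / 10.0)      # soil term, dB -> linear
    return 10.0 * np.log10(sigma_veg + tau2 * sigma_soil)

# With increasing LAI the soil term is attenuated away (tau^2 -> 0)
# and the canopy volume contribution dominates the total backscatter.
lai = np.array([0.5, 2.0, 6.0])
print(wcm_backscatter_db(lai))
```

Evaluating the model over a range of V values reproduces the familiar behaviour: soil-dominated backscatter at low vegetation density, canopy-dominated backscatter once the layer becomes opaque.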

12.2 The Michigan Microwave Canopy Scattering Model (MIMICS)

MIMICS represents vegetation as discrete scatterers (leaves, branches, trunks) with specific sizes, orientations, and distributions, computing first-order scattering from each component plus ground interaction (Ulaby et al. 1990). The model:

  • Discretizes canopy: Divides vegetation into layers and scatterer types
  • Computes scattering: Uses scattering theory for cylinders (branches), discs (leaves), ground
  • Sums contributions: Integrates over canopy depth accounting for attenuation

MIMICS requires detailed canopy structure inputs (height, density, orientation distributions) but provides polarimetric backscatter predictions without empirical calibration (Ulaby et al. 1990). This physical basis enables:

  • Sensitivity analysis: Determining which parameters affect backscatter most
  • Inversion: Retrieving canopy parameters from observed backscatter
  • Mission planning: Simulating expected performance of new sensors

12.3 The Discrete Scattering Model

More sophisticated models treat each tree as an assembly of discrete scatterers (trunk, branches at different hierarchies, leaves) and compute coherent scattering through the entire structure (Karam, Fung, and Antar 1988). These models:

  • Preserve phase: Enable interferometric simulation and coherence prediction
  • Handle complex geometry: Realistic tree architectures from growth models
  • Predict polarimetry: Full scattering matrix for all polarizations

12.4 Model-Based Inversion

These models enable model-based inversion: estimating forest parameters by fitting model predictions to observed backscatter (E. Chen et al. 2016). The inversion minimizes:

\[\min_{\mathbf{p}} ||\sigma°_{obs} - \sigma°_{model}(\mathbf{p})||^2\]

where p is the parameter vector (height, biomass, LAI, etc.), σ°_obs is the observed backscatter, and σ°_model is the model-predicted backscatter (Lucas et al. 2006).

Challenges include:

  • Ill-posedness: Multiple parameter combinations can produce similar backscatter (E. Chen et al. 2016)
  • Model assumptions: Real forests don’t perfectly match model assumptions
  • Computational cost: Physics-based models are slow; inversion requires many iterations
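A minimal numerical illustration of model-based inversion, using the Water Cloud Model of Section 12.1 as the forward model and a brute-force search over a single vegetation descriptor V. The parameter values, and the assumption that A, B, and the soil term are known, are simplifications for the sketch.

```python
import numpy as np

def sigma_model(V, A=0.05, B=0.3, sigma_soil=0.063, theta=np.deg2rad(35.0)):
    """Forward model: Water Cloud Model backscatter in linear units.
    All parameter values are illustrative assumptions."""
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))
    return A * V * (1.0 - tau2) * np.cos(theta) + tau2 * sigma_soil

# Synthetic observation: forward-model a known V and add slight noise.
rng = np.random.default_rng(0)
v_true = 3.0
sigma_obs = sigma_model(v_true) + rng.normal(0.0, 1e-4)

# Inversion: minimise ||sigma_obs - sigma_model(V)||^2 over a grid of V.
candidates = np.linspace(0.01, 8.0, 2000)
v_hat = candidates[np.argmin((sigma_obs - sigma_model(candidates)) ** 2)]
print(round(v_hat, 2))  # recovers a value close to v_true = 3.0
```

With a single unknown and low noise the residual has a clear minimum; as soon as several parameters (V, A, B, soil moisture) are estimated jointly, the residual surface develops multiple minima, which is the ill-posedness noted above.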

Increasingly, machine learning approaches are replacing model-based inversion, using models to generate training data but learning flexible empirical relationships that can incorporate ancillary data and handle model discrepancies (Shen et al. 2019).

13 Conclusion: The Radar Remote Sensing Toolbox

This theoretical overview has introduced the fundamental principles underlying radar remote sensing of forests and landscapes. The unique characteristics of microwave radiation—atmospheric penetration, surface and volume penetration, sensitivity to structure and moisture, and coherent detection preserving phase—create capabilities fundamentally complementary to optical remote sensing.

Key principles to remember:

  1. Active illumination provides all-weather, day-night capability independent of solar illumination
  2. Wavelength-dependent penetration determines which structural elements are sensed, from leaves (X-band) to trunks (P-band)
  3. Polarization reveals scattering mechanisms, distinguishing surface, double-bounce, and volume scattering
  4. Geometric effects (foreshortening, layover, shadow) require careful consideration in image interpretation and geometric correction
  5. Speckle is an inherent characteristic of coherent imaging, requiring filtering or multi-temporal averaging for radiometric applications
  6. Phase information enables interferometry for topography, displacement, and forest structure measurement with extraordinary precision

The coming generation of radar missions—NISAR’s systematic L-band coverage, BIOMASS’s pioneering P-band measurements, commercial constellations’ rapid revisit—will provide unprecedented capabilities for forest monitoring. Combining these with advanced techniques like tomography, multi-temporal analysis, and machine learning will enable operational mapping of forest structure, biomass, and change at scales and precisions previously unattainable.

Understanding the physical principles governing radar-vegetation interaction—as presented in this theoretical foundation—is essential for developing robust methods, correctly interpreting results, and advancing the science of radar remote sensing for Earth observation.

14 References

Argenti, Fabrizio, Alessandro Lapini, Tiziano Bianchi, and Luciano Alparone. 2013. “A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images.” IEEE Geoscience and Remote Sensing Magazine 1 (3): 6–35. https://doi.org/10.1109/MGRS.2013.2277512.
Attema, Evert PW, and Fawwaz T Ulaby. 1978. “Vegetation Modeled as a Water Cloud.” Radio Science 13 (2): 357–64. https://doi.org/10.1029/RS013i002p00357.
Bamler, Richard, and Philipp Hartl. 1998. “Synthetic Aperture Radar Interferometry.” Inverse Problems 14 (4): R1–54. https://doi.org/10.1088/0266-5611/14/4/001.
Banda, Francesco, Davide Giudici, Thuy Le Toan, Mauro Mariotti d’Alessandro, Kostas Papathanassiou, Shaun Quegan, Georg Riembauer, et al. 2016. “Forest Height Retrieval from Commercial x-Band SAR Products.” IEEE Transactions on Geoscience and Remote Sensing 54 (6): 3503–16. https://doi.org/10.1109/TGRS.2016.2520904.
Bindlish, Rajat, and Ana P Barros. 2000. “Multifrequency Soil Moisture Inversion from SAR Measurements with the Use of IEM.” Remote Sensing of Environment 71 (1): 67–88. https://doi.org/10.1016/S0034-4257(99)00065-6.
Bonan, Gordon B. 2008. “Forests and Climate Change: Forcings, Feedbacks, and the Climate Benefits of Forests.” Science 320 (5882): 1444–49. https://doi.org/10.1126/science.1155121.
Bouvet, Alexandre, Stéphane Mermoz, Maxime Ballère, Thierry Koleck, and Thuy Le Toan. 2018. “SAR Data for Tropical Forest Disturbance Alerts: A Comparison with Optical Data.” Remote Sensing of Environment 205: 31–48. https://doi.org/10.1016/j.rse.2017.11.003.
Cartus, Oliver, Maurizio Santoro, and Josef Kellndorfer. 2012. “Mapping Forest Aboveground Biomass in the Northeastern United States with ALOS PALSAR Dual-Polarization l-Band.” Remote Sensing of Environment 124: 466–78. https://doi.org/10.1016/j.rse.2012.05.029.
Chen, Curtis W, and Howard A Zebker. 2001. “Two-Dimensional Phase Unwrapping with Use of Statistical Models for Cost Functions in Nonlinear Optimization.” Journal of the Optical Society of America A 18 (2): 338–51. https://doi.org/10.1364/JOSAA.18.000338.
Chen, Erxue, Zengyuan Li, Xin Tian, and Shengxiang Yu. 2016. “Retrieval of Forest Height in Mountainous Regions Using TanDEM-X Data.” Journal of Mountain Science 13 (11): 1937–47. https://doi.org/10.1007/s11629-016-3916-1.
Cloude, Shane. 2009. Polarisation: Applications in Remote Sensing. Oxford: Oxford University Press.
Cloude, Shane R. 2006. “Polarization Coherence Tomography.” In 2006 IEEE International Symposium on Geoscience and Remote Sensing, 2507–10. IEEE. https://doi.org/10.1109/IGARSS.2006.657.
Cloude, Shane R, and Konstantinos P Papathanassiou. 2003. “Three-Stage Inversion Process for Polarimetric SAR Interferometry.” IEE Proceedings-Radar, Sonar and Navigation 150 (3): 125–34. https://doi.org/10.1049/ip-rsn:20030449.
Cloude, Shane R, and Eric Pottier. 1997. “An Entropy Based Classification Scheme for Land Applications of Polarimetric SAR.” IEEE Transactions on Geoscience and Remote Sensing 35 (1): 68–78. https://doi.org/10.1109/36.551935.
Cumming, Ian G, and Frank H Wong. 2005. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation. Boston: Artech House.
De Zan, Francesco, Alessandro Parizzi, Pau Prats-Iraola, and Paco López-Dekker. 2014. “A SAR Interferometric Model for Soil Moisture.” IEEE Transactions on Geoscience and Remote Sensing 52 (1): 418–25. https://doi.org/10.1109/TGRS.2013.2241069.
Dehls, John, Yngvar Larsen, Petar Marinkovic, Bjørn Lund, Gyula Grenerczy, Juha Ahola, Gunter Liebsch, et al. 2019. “EUROPEAN GROUND MOTION SERVICE – Service Implementation Plan and Product Specification Document.” Copernicus Land Monitoring Service. https://land.copernicus.eu/user-corner/technical-library/egms-specification-and-implementation-plan.
Deledalle, Charles-Alban, Loïc Denis, and Florence Tupin. 2015. “MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?” IEEE Transactions on Image Processing 24 (3): 1048–59. https://doi.org/10.1109/TIP.2015.2389710.
Denis, Loïc, Florence Tupin, Jérôme Darbon, and Marc Sigelle. 2006. “SAR Image Filtering Based on the Heavy-Tailed Rayleigh Model.” IEEE Transactions on Image Processing 15 (9): 2686–93. https://doi.org/10.1109/TIP.2006.877362.
FAO. 2020. “Global Forest Resources Assessment 2020: Main Report.” Rome: Food; Agriculture Organization of the United Nations. https://doi.org/10.4060/ca8753en.
FAO and UNEP. 2020. “The State of the World’s Forests 2020: Forests, Biodiversity and People.” Rome: Food; Agriculture Organization of the United Nations; United Nations Environment Programme. https://doi.org/10.4060/ca8642en.
Fornaro, Gianfranco, Francesco Serafino, and Francesco Soldovieri. 2005. “Three-Dimensional Multipass SAR Focusing: Experiments with Long-Term Spaceborne Data.” IEEE Transactions on Geoscience and Remote Sensing 43 (4): 702–14. https://doi.org/10.1109/TGRS.2005.843567.
Freeman, Anthony, and Stephen L Durden. 1998. “A Three-Component Scattering Model for Polarimetric SAR Data.” IEEE Transactions on Geoscience and Remote Sensing 36 (3): 963–73. https://doi.org/10.1109/36.673687.
Frost, Victor S, Josephine Abbott Stiles, K Sam Shanmugan, and Jack C Holtzman. 1982. “A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise.” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 2: 157–66. https://doi.org/10.1109/TPAMI.1982.4767223.
Fung, Adrian K. 1994. Microwave Scattering and Emission Models and Their Applications. Boston: Artech House.
Funning, Gareth J, Barry Parsons, Tim J Wright, James A Jackson, and Eric J Fielding. 2005. “Surface Displacements and Source Parameters of the 2003 Bam (Iran) Earthquake from Envisat Advanced Synthetic Aperture Radar Imagery.” Journal of Geophysical Research: Solid Earth 110 (B9). https://doi.org/10.1029/2004JB003338.
Galloway, Devin L, and Thomas J Burbey. 2011. “Review: Regional Land Subsidence Accompanying Groundwater Extraction.” Hydrogeology Journal 19 (8): 1459–86. https://doi.org/10.1007/s10040-011-0775-5.
Ghiglia, Dennis C, and Mark D Pritt. 1998. Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software. New York: Wiley.
Goldstein, Richard M, Howard A Zebker, and Charles L Werner. 1988. “Satellite Radar Interferometry: Two-Dimensional Phase Unwrapping.” Radio Science 23 (4): 713–20. https://doi.org/10.1029/RS023i004p00713.
Goodman, Joseph W. 1976. “Some Fundamental Properties of Speckle.” Journal of the Optical Society of America 66 (11): 1145–50. https://doi.org/10.1364/JOSA.66.001145.
Hanssen, Ramon F. 2001. Radar Interferometry: Data Interpretation and Error Analysis. Dordrecht: Springer Science & Business Media.
Henderson, Floyd M, and Anthony J Lewis. 1998. Manual of Remote Sensing: Principles and Applications of Imaging Radar. 3rd ed. Vol. 2. New York: John Wiley & Sons.
———. 2008. “Radar Detection of Wetland Ecosystems: A Review.” International Journal of Remote Sensing 29 (20): 5809–35. https://doi.org/10.1080/01431160801958405.
Joughin, Ian, Benjamin E Smith, Ian M Howat, Twila Moon, and Ted A Scambos. 2016. “A SAR Record of Early 21st Century Change in Greenland.” Journal of Glaciology 62 (231): 62–71. https://doi.org/10.1017/jog.2016.10.
Karam, Mohamed A, Adrian K Fung, and Yahia MM Antar. 1988. “Electromagnetic Scattering from a Layer of Finite Length, Randomly Oriented, Dielectric, Circular Cylinders over a Rough Interface with Application to Vegetation.” International Journal of Remote Sensing 9 (6): 1109–34. https://doi.org/10.1080/01431168808954918.
Kasischke, Eric S, Scott J Goetz, Matthew C Hansen, et al. 2019. “Forest Applications.” In SAR Handbook: Comprehensive Methodologies for Forest Monitoring and Biomass Estimation, edited by Ana I Flores-Anderson et al., 56–89. NASA.
Krieger, Gerhard, Alberto Moreira, Hauke Fiedler, Irena Hajnsek, Marian Werner, Marwan Younis, and Manfred Zink. 2013. “TanDEM-X: A Radar Interferometer with Two Formation-Flying Satellites.” Acta Astronautica 89: 83–98. https://doi.org/10.1016/j.actaastro.2013.03.008.
Kumar, Prashant, Rajendra Prasad, Ajeet Choudhary, Vinay Kumar Mishra, Dharmendra Kumar Gupta, and Ajay Kumar Singh. 2016. “A Water Cloud Model for the Estimation of Soybean Leaf Area Index Using c-Band Dual-Polarization SAR Data.” International Journal of Remote Sensing 37 (5): 1048–60. https://doi.org/10.1080/01431161.2016.1142685.
Kumar, Prashant, Rajendra Prasad, Dharmendra Kumar Gupta, Vinay Kumar Mishra, Ashish Kumar Vishwakarma, Vaibhav Pratap Yadav, Rajeev Bala, Ajeet Choudhary, and Ram Avtar. 2014. “A Multicriteria Decision Making Approach for Precision Agriculture.” International Journal of Remote Sensing 35 (6): 2119–45. https://doi.org/10.1080/01431161.2014.880577.
Le Toan, Thuy, Shaun Quegan, Malcolm WJ Davidson, Heiko Balzter, Philippe Paillou, Kostas Papathanassiou, Stephen Plummer, et al. 2011. “Relating Forest Biomass to SAR Data.” IEEE Transactions on Geoscience and Remote Sensing 49 (2): 1129–50. https://doi.org/10.1109/TGRS.2010.2054138.
Lee, Jong-Sen. 1981. “Refined Filtering of Image Noise Using Local Statistics.” Computer Graphics and Image Processing 15 (4): 380–89. https://doi.org/10.1016/S0146-664X(81)80018-4.
Lee, Jong-Sen, and Eric Pottier. 2009. Polarimetric Radar Imaging: From Basics to Applications. Boca Raton, Florida: CRC Press.
Lillesand, Thomas M, Ralph W Kiefer, and Jonathan W Chipman. 2015. Remote Sensing and Image Interpretation. 7th ed. New York: John Wiley & Sons.
Lopes, Armand, Emmanuel Nezry, Ridha Touzi, and Henri Laur. 1990. “Adaptive Speckle Filters and Scene Heterogeneity.” IEEE Transactions on Geoscience and Remote Sensing 28 (6): 992–1000. https://doi.org/10.1109/36.62623.
Lucas, Richard M, Nick Cronin, Angela Lee, Mahta Moghaddam, Christoph Witte, and Peter Tickle. 2006. “The Integration of Optical and SAR Imagery for Forest Monitoring in the Amazon.” Photogrammetric Engineering & Remote Sensing 72 (2): 169–79. https://doi.org/10.14358/PERS.72.2.169.
Massonnet, Didier, Pierre Briole, and Alain Arnaud. 1995. “Deflation of Mount Etna Monitored by Spaceborne Radar Interferometry.” Nature 375 (6532): 567–70. https://doi.org/10.1038/375567a0.
Massonnet, Didier, Marc Rossi, C Carmona, F Adragna, Gilles Peltzer, Kurt Feigl, and Thierry Rabaute. 1993. “The Displacement Field of the Landers Earthquake Mapped by Radar Interferometry.” Nature 364 (6433): 138–42. https://doi.org/10.1038/364138a0.
McNairn, Heather, and Jiali Shang. 2016. “The Application of c-Band Polarimetric SAR for Agriculture: A Review.” Canadian Journal of Remote Sensing 42 (5): 431–49. https://doi.org/10.1080/07038992.2016.1194564.
Meyer, Franz J, Jeffrey Nicoll, and Anthony P Doulgeris. 2019. “SAR Processing for Forest Biophysical Parameters Retrieval.” Remote Sensing 11 (18): 2091. https://doi.org/10.3390/rs11182091.
Michel, Rémi, Jean-Philippe Avouac, and Jacques Taboury. 1999. “Measuring Ground Displacements from SAR Amplitude Images: Application to the Landers Earthquake.” Geophysical Research Letters 26 (7): 875–78. https://doi.org/10.1029/1999GL900138.
Minh, Dinh Ho Tong, Thuy Le Toan, Fabio Rocca, Stefano Tebaldini, Ludovic Villard, Maxime Réjou-Méchain, Oliver L Phillips, et al. 2016. “SAR Tomography for the Retrieval of Forest Biomass and Height: Cross-Validation at Two Tropical Forest Sites in French Guiana.” Remote Sensing of Environment 175: 138–47. https://doi.org/10.1016/j.rse.2015.12.037.
Mitchard, Edward TA, Sassan S Saatchi, Iain H Woodhouse, Grace Nangendo, Nuno S Ribeiro, Mathew Williams, Casey M Ryan, Simon L Lewis, Ted R Feldpausch, and Patrick Meir. 2009. “Using Satellite Radar Backscatter to Predict Above-Ground Woody Biomass: A Consistent Relationship Across Four Different African Landscapes.” Geophysical Research Letters 36 (23). https://doi.org/10.1029/2009GL040692.
Moreira, Alberto, Pau Prats-Iraola, Marwan Younis, Gerhard Krieger, Irena Hajnsek, and Konstantinos P Papathanassiou. 2013. “A Tutorial on Synthetic Aperture Radar.” IEEE Geoscience and Remote Sensing Magazine 1 (1): 6–43. https://doi.org/10.1109/MGRS.2013.2248301.
Oliver, Chris, and Shaun Quegan. 1991. “Optimum Edge Detection in SAR.” IEE Proceedings F (Radar and Signal Processing) 138 (2): 107–14. https://doi.org/10.1049/ip-f-2.1991.0016.
Quegan, Shaun, and Jiong Jiong Yu. 2001. “Filtering of Multichannel SAR Images.” IEEE Transactions on Geoscience and Remote Sensing 39 (11): 2373–79. https://doi.org/10.1109/36.964973.
Raney, R Keith. 2007. “Hybrid-Polarity SAR Architecture.” IEEE Transactions on Geoscience and Remote Sensing 45 (11): 3397–3404. https://doi.org/10.1109/TGRS.2007.895883.
Reiche, Johannes, Sytze de Bruin, Dirk Hoekman, Jan Verbesselt, and Martin Herold. 2015. “A Bayesian Approach to Combine Landsat and ALOS PALSAR Time Series for Near Real-Time Deforestation Detection.” Remote Sensing 7 (5): 4973–96. https://doi.org/10.3390/rs70504973.
Reigber, Andreas, and Alberto Moreira. 2000. “First Demonstration of Airborne SAR Tomography Using Multibaseline l-Band Data.” IEEE Transactions on Geoscience and Remote Sensing 38 (5): 2142–52. https://doi.org/10.1109/36.868873.
Rignot, Eric, Steven J Ostro, Jakob J Van Zyl, and Kenneth C Jezek. 1995. “Backscatter Model for the Unusual Radar Properties of the Greenland Ice Sheet.” Journal of Geophysical Research: Planets 100 (E5): 9389–9400. https://doi.org/10.1029/95JE00485.
Robinson, Clare, Sassan Saatchi, Mathias Neumann, and Thomas Gillespie. 2013. “Impacts of Spatial Variability on Aboveground Biomass Estimation from l-Band Radar in a Temperate Forest.” Remote Sensing 5 (3): 1001–23. https://doi.org/10.3390/rs5031001.
Rosen, Paul A, Scott Hensley, Ian R Joughin, Fuk K Li, Søren N Madsen, Ernesto Rodriguez, and Richard M Goldstein. 2000. “Synthetic Aperture Radar Interferometry.” Proceedings of the IEEE 88 (3): 333–82. https://doi.org/10.1109/5.838084.
Rosen, Paul, and Ramarao Kumar. 2017. “The NASA-ISRO SAR (NISAR) Mission Dual-Band Radar Instrument Preliminary Design.” IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 3832–35. https://doi.org/10.1109/IGARSS.2017.8127836.
Saatchi, Sassan S, Nancy L Harris, Sandra Brown, Michael Lefsky, Edward TA Mitchard, William Salas, Brian R Zutta, et al. 2011. “Benchmark Map of Forest Carbon Stocks in Tropical Regions Across Three Continents.” Proceedings of the National Academy of Sciences 108 (24): 9899–9904. https://doi.org/10.1073/pnas.1019576108.
Santoro, Maurizio, Christian Beer, Oliver Cartus, Christiane Schmullius, Anatoly Shvidenko, Ian McCallum, Urs Wegmüller, and Andreas Wiesmann. 2011. “Estimates of Forest Growing Stock Volume for Sweden, Central Siberia, and Québec Using Envisat Advanced Synthetic Aperture Radar Backscatter Data.” Remote Sensing 3 (1): 207–27. https://doi.org/10.3390/rs3010207.
Santoro, Maurizio, Oliver Cartus, Nuno Carvalhais, Danaë MA Rozendaal, Valerio Avitabile, Arnan Araza, Sytze de Bruin, et al. 2021. “The Global Forest Above-Ground Biomass Pool for 2010 Estimated from High-Resolution Satellite Observations.” Earth System Science Data 13 (8): 3927–50. https://doi.org/10.5194/essd-13-3927-2021.
Shen, Xinyi, Dacheng Wang, Kebiao Mao, Emmanouil Anagnostou, and Yang Hong. 2019. “Inundation Extent Mapping by Synthetic Aperture Radar: A Review.” Remote Sensing 11 (7): 879. https://doi.org/10.3390/rs11070879.
Singleton, Andrew, Zhenhong Li, Trevor Hoey, and Jan-Peter Muller. 2014. “Evaluating Sub-Pixel Offset Techniques as an Alternative to d-InSAR for Monitoring Episodic Landslide Movements in Vegetated Terrain.” Remote Sensing of Environment 147: 133–44. https://doi.org/10.1016/j.rse.2014.03.003.
Small, David. 2011. “Flattening Gamma: Radiometric Terrain Correction for SAR Imagery.” IEEE Transactions on Geoscience and Remote Sensing 49 (8): 3081–93. https://doi.org/10.1109/TGRS.2011.2120616.
Strozzi, Tazio, Adrian Luckman, Tavi Murray, Urs Wegmüller, and Charles L Werner. 2002. “Glacier Motion Estimation Using SAR Offset-Tracking Procedures.” IEEE Transactions on Geoscience and Remote Sensing 40 (11): 2384–91. https://doi.org/10.1109/TGRS.2002.805079.
Tebaldini, Stefano. 2012. “Multibaseline Polarimetric SAR Tomography of a Boreal Forest at P- and L-Bands.” IEEE Transactions on Geoscience and Remote Sensing 50 (1): 232–46. https://doi.org/10.1109/TGRS.2011.2159614.
Tebaldini, Stefano, Mauro Mariotti d’Alessandro, Francesco Banda, and Andrea Monti Guarnieri. 2020. “The Status of Technologies to Measure Forest Biomass and Structural Properties: State of the Art in SAR Tomography of Tropical Forests.” Surveys in Geophysics 41: 779–801. https://doi.org/10.1007/s10712-019-09551-z.
Tebaldini, Stefano, and Fabio Rocca. 2010. “Single and Multipolarimetric SAR Tomography of Forested Areas: A Parametric Approach.” IEEE Transactions on Geoscience and Remote Sensing 48 (5): 2375–87. https://doi.org/10.1109/TGRS.2009.2037748.
Torres, Ramon, Paul Snoeij, Dirk Geudtner, David Bibby, Malcolm Davidson, Evert Attema, Pierre Potin, et al. 2012. “GMES Sentinel-1 Mission.” Remote Sensing of Environment 120: 9–24. https://doi.org/10.1016/j.rse.2011.05.028.
Treuhaft, Robert N, Bruce D Chapman, João Roberto dos Santos, Fábio Guimarães Gonçalves, Luciano Vieira Dutra, Paulo Maurício Lima de Alencastro Graça, and John B Drake. 2010. “Biomass Estimation in a Tropical Wet Forest Using Fourier Transforms of Profiles from Lidar or Interferometric SAR.” Geophysical Research Letters 37 (23). https://doi.org/10.1029/2010GL045608.
Treuhaft, Robert N, and Paul R Siqueira. 1996. “Vertical Structure of Vegetated Land Surfaces from Interferometric and Polarimetric Radar.” Radio Science 31 (6): 1449–85. https://doi.org/10.1029/96RS01763.
Ulaby, Fawwaz T, and David G Long. 2013. Microwave Radar and Radiometric Remote Sensing. Artech House.
Ulaby, Fawwaz T, Kamal Sarabandi, Kyle McDonald, Michael Whitt, and M Craig Dobson. 1990. “Michigan Microwave Canopy Scattering Model.” International Journal of Remote Sensing 11 (7): 1223–53. https://doi.org/10.1080/01431169008955090.
Ulander, Lars MH. 1996. “Radiometric Slope Correction of Synthetic-Aperture Radar Images.” IEEE Transactions on Geoscience and Remote Sensing 34 (5): 1115–22. https://doi.org/10.1109/36.536527.
Wegmüller, Urs, and Charles L Werner. 1995. “SAR Interferometric Signatures of Forest.” IEEE Transactions on Geoscience and Remote Sensing 33 (5): 1153–61. https://doi.org/10.1109/36.469479.
Wigneron, Jean-Pierre, Thomas J Jackson, Peggy O’Neill, Gabrielle De Lannoy, Patricia de Rosnay, Jeffrey P Walker, Paolo Ferrazzoli, et al. 2017. “Modelling the Passive Microwave Signature from Land Surfaces: A Review of Recent Results and Application to the L-Band SMOS & SMAP Soil Moisture Retrieval Algorithms.” Remote Sensing of Environment 192: 238–62. https://doi.org/10.1016/j.rse.2017.01.024.
Woodhouse, Iain H. 2017. Introduction to Microwave Remote Sensing. Boca Raton, Florida: CRC Press. https://doi.org/10.1201/9781315272573.
Yamaguchi, Yoshio, Akitsugu Sato, Wolfgang-Martin Boerner, Ryoichi Sato, and Hiroyoshi Yamada. 2011. “Four-Component Scattering Power Decomposition with Rotation of Coherency Matrix.” IEEE Transactions on Geoscience and Remote Sensing 49 (6): 2251–58. https://doi.org/10.1109/TGRS.2010.2099124.
Zebker, Howard A, and Richard M Goldstein. 1986. “Topographic Mapping from Interferometric Synthetic Aperture Radar Observations.” Journal of Geophysical Research: Solid Earth 91 (B5): 4993–99. https://doi.org/10.1029/JB091iB05p04993.
Zebker, Howard A, and John Villasenor. 1992. “Decorrelation in Interferometric Radar Echoes.” IEEE Transactions on Geoscience and Remote Sensing 30 (5): 950–59. https://doi.org/10.1109/36.175330.