DIGITAL IMAGE

In orbital remote sensing, an image is represented by a large amount of data, which can be manipulated in digital form to extract information. Each point captured by the sensor corresponds to a minimum area called a pixel (picture element), which must be geographically identified, and for which digital values are recorded, related to the energy reflected in well-defined slices (bands) of the electromagnetic spectrum.

A digital image can be defined as a two-dimensional function of the light intensity reflected or emitted by a scene, written I(x,y), where I stands for the light intensity at the spatial coordinate (x,y). This intensity is represented by a non-negative, finite integer value called the gray level.
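As a minimal sketch of this definition (using NumPy, with made-up gray levels rather than real sensor data), a digital image is simply a two-dimensional array of non-negative integers:

```python
import numpy as np

# A tiny 3 x 4 digital image: each entry is the gray level I(x, y) of one pixel.
# Values are illustrative, not real sensor data.
image = np.array([
    [12, 40, 41, 200],
    [13, 42, 45, 198],
    [11, 39, 44, 201],
], dtype=np.uint8)  # 8 bits per pixel -> gray levels 0..255

print(image.shape)               # (3, 4): 3 rows, 4 columns
print(image[1, 2])               # gray level at row 1, column 2 -> 45
print(image.min(), image.max())  # smallest and largest gray levels
```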

In order to apply digital processing to remote sensing images, the images must be in digital format. Basically there are two ways to get a digital image: (1) acquire the remote sensing image in analog format (for instance, an aerial photograph) and then digitize it, or (2) acquire the remote sensing image directly in digital format, such as the data on a Computer Compatible Tape (CCT) recorded by satellites such as Landsat or SPOT.

The image digitizing process corresponds to discretizing (or sampling) the scene under consideration, by superimposing a hypothetical mesh over it and assigning an integer (the gray level) to each point of this mesh (this step is known as quantization).

In satellites such as Landsat and SPOT, the electrical signal detected in each channel is converted on board by an analog/digital system, and its output is sent to the receiving stations by telemetry. The images from these satellites are sampled with a large number of points (the images from the Thematic Mapper sensor aboard the Landsat satellite have over 6000 samples per row). Besides this, the images are multispectral: they are a collection of images of a single scene, taken at the same moment by several sensors, each with a different spectral response.


Input of images
For the input of image data, the techniques available in SPRING are:


See also:
About Remote Sensing.
Image Registration.
How to get other system's conceptual information.


Image characterization

An image can be represented by a data matrix, in which the rows and columns define the spatial coordinates of each pixel. A finite number of bits is used to represent the scene radiance at each pixel.

Radiance is the radiant flux from a source, in a given direction, per unit area.

The radiance measured at each pixel is represented by its gray level; it includes not only the radiance reflected by the surface within the pixel's area, but also the radiance contributed by atmospheric scattering.

The continuous radiance of a scene is quantized into discrete gray levels in the digital image; the number of levels is given by the number of bits per pixel used to encode the radiance interval. Current-generation sensors usually acquire images in either 8 or 10 bits (thus 256 or 1024 different gray levels).
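As a rough illustration of this quantization step (a sketch using synthetic normalized radiance values, not a sensor model), continuous radiance can be mapped to n-bit gray levels like this:

```python
import numpy as np

def quantize(radiance, bits):
    """Map continuous radiance in [0, 1) to integer gray levels 0 .. 2**bits - 1."""
    levels = 2 ** bits
    gray = np.floor(radiance * levels).astype(int)
    return np.clip(gray, 0, levels - 1)

radiance = np.array([0.0, 0.25, 0.5, 0.999])  # synthetic normalized radiance
print(quantize(radiance, 8))   # 8 bits  -> 256 possible gray levels
print(quantize(radiance, 10))  # 10 bits -> 1024 possible gray levels
```

The same radiance interval is sliced into finer steps as the number of bits grows, which is exactly why a 10-bit sensor distinguishes more intensity levels than an 8-bit one.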

The gray level represents the average radiance from a relatively small area of the scene. This area is determined by the satellite's altitude and other parameters, such as the IFOV (Instantaneous Field Of View), which is the angle formed by the geometrical projection of a single detector element onto the Earth's surface.
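The ground footprint of one detector element can be estimated from the satellite altitude and the IFOV. A hedged sketch, using the small-angle approximation and illustrative numbers (not official sensor parameters):

```python
def ground_footprint(altitude_m, ifov_rad):
    """Approximate side length of the ground area seen by one detector element.

    Uses the small-angle approximation: footprint = altitude * IFOV.
    """
    return altitude_m * ifov_rad

# Illustrative values: an altitude of about 705 km and an IFOV of about
# 42.5 microradians give a footprint of roughly 30 m, the order of the
# TM spatial resolution quoted elsewhere in this text.
print(round(ground_footprint(705_000, 42.5e-6), 1))
```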

The figure below shows the coordinate system usually used to represent a digital image. The x axis gives the column number, and the y axis the row number.

In the multispectral case the digital representation is more complex, because for each (x,y) coordinate there is a set of gray level values. Thus each pixel is represented by a vector with as many dimensions as there are spectral bands.

A spectral band is the interval between two wavelengths in the electromagnetic spectrum.
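A multispectral image can be sketched as a three-dimensional array, so that each (x, y) pixel is a vector with one gray level per spectral band (synthetic values for illustration):

```python
import numpy as np

rows, cols, bands = 2, 3, 4   # tiny synthetic scene with 4 spectral bands
cube = np.arange(rows * cols * bands, dtype=np.uint8).reshape(rows, cols, bands)

pixel = cube[1, 2]            # the pixel at row 1, column 2
print(pixel)                  # one gray level per band
print(pixel.shape)            # (4,): as many components as spectral bands
```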




Resolution and Bands

SPRING allows the direct input of images from the Landsat, SPOT and NOAA satellites. Each of these images presents distinct resolution characteristics. Analog images, such as photographs on paper, can also be handled by SPRING: once scanned, they can be imported in the TIFF format.

Resolution is a measure of the ability of a sensor system to distinguish between responses that are spectrally similar or spatially close. Resolution can be classified as spatial, spectral, or radiometric.

Spatial Resolution: the smallest angular or linear separation between two objects that the sensor can resolve. For instance, a resolution of 20 meters implies that objects less than 20 meters apart will, in general, not be discriminated by the system.

Spectral Resolution: the width of the spectral slices (bands) of the sensor system. For instance, a sensor operating in the 0.40 to 0.45 um band has a finer spectral resolution than a sensor operating in the 0.40 to 0.50 um band.

Radiometric Resolution: the sensitivity of the sensor system in distinguishing two different intensity levels of the returning signal. For instance, a resolution of 10 bits (1024 digital levels) is better than a resolution of 8 bits.

The table below presents the resolution characteristics of the Thematic Mapper (TM), Haute Resolution Visible (HRV) and Advanced Very High Resolution Radiometer (AVHRR) sensor systems, aboard the Landsat, SPOT and NOAA satellites respectively.

                          TM                    HRV                    AVHRR

Image acquisition         16 days               26 days                twice a day
frequency

Spatial resolution        30 m                  20 m (bands 1 to 3)    1.1 km (nominal)
                          120 m (band 6)        10 m (Pan)

Radiometric resolution    8 bits                8 bits (bands 1-3)     10 bits
                                                6 bits (Pan)

Spectral resolution       Band 1 - 0.45-0.52    Band 1 - 0.50-0.59     Band 1 - 0.58-0.68
(spectral bands,          Band 2 - 0.52-0.60    Band 2 - 0.61-0.68     Band 2 - 0.725-1.1
micrometers)              Band 3 - 0.63-0.69    Band 3 - 0.79-0.89     Band 3 - 3.55-3.93
                          Band 4 - 0.76-0.90    Pan    - 0.51-0.73     Band 4 - 10.30-11.30
                          Band 5 - 1.55-1.75                           Band 5 - 11.50-12.50
                          Band 6 - 10.4-12.5
                          Band 7 - 2.08-2.35

The different spectral bands of these sensors have distinct applications in remote sensing studies. As a guideline, the user should consult the tables below to select the best band for a given project.

Landsat Satellite - TM Sensor

Channel   Spectral band (um)   Main applications
1         0.45 - 0.52          Coastal mapping
                               Soil and vegetation differentiation
                               Coniferous and deciduous vegetation differentiation
2         0.52 - 0.60
3         0.63 - 0.69          Chlorophyll absorption
                               Plant species differentiation
4         0.76 - 0.90          Biomass surveys
                               Delineation of water bodies
5         1.55 - 1.75          Vegetation moisture measurements
                               Cloud and snow differentiation
6         10.4 - 12.5          Mapping of thermal stress in plants
                               Other thermal mapping
7         2.08 - 2.35          Hydrothermal mapping


SPOT Satellite - HRV Sensor

Channel   Spectral band (um)   Main applications
1         0.50 - 0.59          Healthy green vegetation reflectance
                               Water mapping
2         0.61 - 0.68          Chlorophyll absorption
                               Plant species differentiation
                               Soil and vegetation differentiation
3         0.79 - 0.89          Phytomass surveys
                               Delineation of water bodies
Pan       0.51 - 0.73          Urban area studies


NOAA Satellite - AVHRR Sensor

Channel   Spectral band (um)      Main applications
1         0.58 - 0.68             Daylight mapping of clouds, ice and snow
                                  Definition of soil features and vegetation cover
2         0.725 - 1.1             Delineation of water surfaces
                                  Definition of snow and ice melting conditions
                                  Vegetation evaluation and meteorological monitoring (clouds)
3         3.55 - 3.93             Day and night cloud mapping
                                  Sea surface temperature analysis
                                  Hot spot (fire) detection
4 and 5   10.30 - 11.30 (4)       Day and night cloud mapping
          11.50 - 12.50 (5)       Surface measurements of oceans, lakes and rivers
                                  Volcanic eruption detection
                                  Soil moisture; meteorological cloud attributes
                                  Sea surface temperature and soil moisture





Image Format

SPRING reads input from several different devices, in either the superstructure or the fast format, for the sensor systems described above.

Superstructure Format

The superstructure format (the standard for tapes and CD-ROMs) organizes data in four distinct hierarchical levels: the volume, the file, the record, and the data fields. A group of files composes a logical volume, which can be stored across several physical volumes (tapes), and a physical volume can store several logical volumes; that is, it is possible to have one tape with several files (bands), or one band spread over more than one physical volume. The basic components of the superstructure are the volume directory file and the file descriptor.

The volume directory file defines and identifies a logical volume (for instance, a set of bands). The file descriptor is the first record inside each data file (each band) and defines the file's internal structure, providing parameters for interpreting its contents.

Fast Format

The fast format carries a minimal amount of general data, compressing the data as much as possible on a tape so that reading and writing become easier. This format is available only for the band sequential (BSQ) image structure, for TM/Landsat images.

The image files are placed on a single tape, and each tape may have more than one file. There are two file types on a fast-format tape: the header file and the image file.

The header file is the first file on each tape; it contains the data description, such as the date, the processing options and the projection information for the product.

The image files contain only the pixel information. These data may or may not be blocked. Blocking is used to condense an image as much as possible. Most of the time, geocoded images are blocked.

The recording formats used for the TM/Landsat, HRV/SPOT and AVHRR/NOAA sensors are presented next.




The TM/Landsat image

The TM/Landsat image read by SPRING has to be in the band sequential (BSQ) standard. This standard is frequently generated by INPE at the Cachoeira Paulista labs and is available in both the superstructure and fast formats.

In the BSQ standard the image is recorded on the tape band by band, as shown in the scheme below:
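Because all pixels of band 1 come first, then all of band 2, and so on, the position of any pixel in a BSQ file can be computed directly. A minimal sketch, assuming one byte per pixel and no header (which real products do have):

```python
def bsq_offset(band, row, col, n_rows, n_cols):
    """Byte offset of (band, row, col) in a headerless 8-bit BSQ file (0-based)."""
    return (band * n_rows + row) * n_cols + col

# Tiny 2-band, 3 x 4 example: band 2 starts only after all of band 1.
print(bsq_offset(0, 0, 0, 3, 4))  # first pixel of band 1
print(bsq_offset(1, 0, 0, 3, 4))  # first pixel of band 2 (after 3 * 4 pixels)
print(bsq_offset(1, 2, 3, 3, 4))  # last pixel of band 2
```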

The user can select digital products from the TM-Landsat tapes with different geometrical correction levels. The possible levels are 4, 5 and 6, described below.

    Level 4 - The INPE standard product is generated at this level. The geometrical correction computations are applied using the ephemeris and attitude data from the satellite.

    Level 5 - The procedures are the same as at level 4: the basic geometrical correction, with resampling by the nearest-neighbor technique and control points acquired from an official cartographic base.

    Level 6 - The procedures are similar to the level 5 ones, but using resampling by cubic convolution.

The scene size of a TM/Landsat image is 6177 rows by 6489 columns; it can be divided into quadrants of 3087 rows by 3243 columns each. The quadrants are placed within the scene as presented in the figure below:

Quadrants:

A = 1, 2, 5, 6        N = 2, 3, 6, 7
B = 3, 4, 7, 8        S = 10, 11, 14, 15
C = 9, 10, 13, 14     W = 5, 6, 9, 10
D = 11, 12, 15, 16    E = 7, 8, 11, 12
X = 6, 7, 10, 11

 1   2   3   4
 5   6   7   8
 9  10  11  12
13  14  15  16
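The quadrant definitions above can be sketched as a simple lookup table (cell numbers taken from the 4 x 4 grid in the figure):

```python
# Cells of the 4 x 4 scene grid covered by each quadrant, per the figure above.
QUADRANTS = {
    "A": (1, 2, 5, 6),      "N": (2, 3, 6, 7),
    "B": (3, 4, 7, 8),      "S": (10, 11, 14, 15),
    "C": (9, 10, 13, 14),   "W": (5, 6, 9, 10),
    "D": (11, 12, 15, 16),  "E": (7, 8, 11, 12),
    "X": (6, 7, 10, 11),
}

print(QUADRANTS["A"])  # the four grid cells of quadrant A
```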



Landsat images in a CDROM

TM/Landsat data on CD-ROM are distributed in full-frame format (about 185 x 185 km) or as quadrants (about 96 x 96 km), with 1 to 7 spectral bands. All scenes are delivered with the same basic radiometric correction, which consists of equalizing the sensor responses so as to remove the striping effect of the TM/Landsat data. Histogram equalization and correction for the sun elevation angle are not applied.

The CD-ROM is formatted in the IBM-DOS standard and can be read by any device that accepts optical disks in the ISO-9660 standard. The disk is structured in sub-directories:

  • in the root directory, some general files are located, such as the formatting documentation and a program that converts the data from the CD-ROM format to the TIFF format;
  • one or more directories with the WRS scene identification. For instance, a full-frame image of Rio de Janeiro (path 217, row 76) will be located in the \217_076 directory. If the image is a quadrant, the quadrant acronym will also be part of the directory's name. For instance, quadrant A of the same scene will be in the \217_076A directory;
  • in each directory, there will be one or more sub-directories named after the scene acquisition date. The general format of the sub-directory name is \yymmdd, where "yy" are the last two digits of the acquisition year, "mm" is the acquisition month, and "dd" is the acquisition day. For instance, a scene acquired on January 31st, 1994 will be located in the \940131 sub-directory;
  • in each of these sub-directories are the image files, one for each requested band, as well as some product description files (similar to the superstructure CCT files; see the detailed description ahead). Each image file is simply named BAND#.DAT, where "#" is the band number.

For instance, band 7 of the Rio de Janeiro scene, quadrant A, acquired on January 31st, 1994, should be accessed with the name: \217_076a\940131\band7.dat
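The path above can be assembled programmatically from its parts. A sketch (the helper function below is hypothetical; only the \path_row[quadrant]\yymmdd\BAND#.DAT naming convention comes from the description above):

```python
def band_path(path, row, quadrant, year, month, day, band):
    """Build the CD-ROM file name for one TM band (hypothetical helper)."""
    scene = f"{path:03d}_{row:03d}{quadrant}"      # e.g. 217_076A
    date = f"{year % 100:02d}{month:02d}{day:02d}"  # \yymmdd convention
    return f"\\{scene}\\{date}\\BAND{band}.DAT"

# Band 7, Rio de Janeiro scene, quadrant A, acquired January 31st, 1994:
print(band_path(217, 76, "A", 1994, 1, 31, 7))  # \217_076A\940131\BAND7.DAT
```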

The CDROM images are recorded in the superstructure format.

The CDROM might also have a \DEMO directory, with some demonstration images in the TIFF or JPEG format.


See also:
Orbital Systems.





HRV/SPOT Image

The SPRING image reading program (IMPIMA) allows reading HRV/SPOT images in the band interleaved by line (BIL) format, usually generated by INPE at the Cachoeira Paulista lab and available to the general user.

In the BIL format, each row is recorded sequentially for all bands, as shown in the scheme below:
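Since, in BIL order, each row is repeated once per band before the next row begins, the byte position of any pixel can again be computed directly. A sketch under the same assumptions as before (one byte per pixel, no header):

```python
def bil_offset(band, row, col, n_bands, n_cols):
    """Byte offset of (band, row, col) in a headerless 8-bit BIL file (0-based)."""
    return (row * n_bands + band) * n_cols + col

# 3-band, 4-column example: within one row, the band records follow each other.
print(bil_offset(0, 0, 0, 3, 4))  # row 0, band 1
print(bil_offset(1, 0, 0, 3, 4))  # row 0, band 2 (one band record later)
print(bil_offset(0, 1, 0, 3, 4))  # row 1, band 1 (after all 3 band records of row 0)
```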

The user may select digital products from the HRV/SPOT tape with different geometrical correction levels. The possible levels are 1A, 1B, 2A and 2B, described below.

    Level 1A: the image contains the original data with radiometric calibration, absolute and relative, through detector normalization, without geometrical correction or calibration between bands.

    Level 1B: the radiometric correction is the same as in level 1A, plus resampling to compensate for internal and external system effects, and geometrical correction for perspective, Earth rotation, and satellite speed variation effects.

    Level 2A: the radiometric correction is the same as in level 1B, with a geometrical pre-registration to a map, using the satellite attitude data.

    Level 2B: the image is geometrically corrected to a map, using ground control points and satellite attitude data.

The scene format of a SPOT image depends on whether the data are multispectral (bands 1, 2 and 3) or panchromatic (Pan), and on the image correction level.

The user has access to this information in a listing provided with the tape.

The SPOT image size is defined according to the correction level, as presented in the table below:

Level    Mode   Number of rows    Number of columns
1A       P      6000              6000
         XS     3000              3000
1B       P      6000              6400 to 8500
         XS     3000              3200 to 4250
2A/2B    P      7200 to 10200     7500 to 10200
         XS     3600 to 5100      3750 to 5100

The SPOT image scene can also be divided into quadrants, as shown in the figure below. Each quadrant covers an area of about 40 x 40 km.


The user may request a scene located between quadrants. In this case the user has to identify the desired area in the image and define a 40 x 40 km square enclosing the area of interest.
See also:
Orbital Systems.





AVHRR/NOAA image

The AVHRR/NOAA image provided by INPE is in the band interleaved by pixel (BIP) format. In the BIP format, each pixel is recorded sequentially for all bands, as presented in the scheme below.

Register   Row   Columns
1          1     B1.1 B2.1 B3.1 B4.1 B5.1 B1.2 B2.2 ...
2          2     B1.n B2.n B3.n B4.n B5.n B1.n+1 ...

where n is the number of the first pixel recorded in the next row.
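In BIP order, all band values of one pixel are stored together before the next pixel begins, so the byte position of any value follows the same kind of formula as for BSQ and BIL. A sketch assuming one byte per value and no header:

```python
def bip_offset(band, row, col, n_bands, n_cols):
    """Byte offset of (band, row, col) in a headerless 8-bit BIP file (0-based)."""
    return (row * n_cols + col) * n_bands + band

# 5-band example matching the scheme above: B1.1 B2.1 ... B5.1 B1.2 ...
print(bip_offset(0, 0, 0, 5, 2048))  # B1.1
print(bip_offset(4, 0, 0, 5, 2048))  # B5.1 (last band of the first pixel)
print(bip_offset(0, 0, 1, 5, 2048))  # B1.2 (first band of the second pixel)
```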

The tapes can be recorded in 10 bits (full) or 8 bits (compressed). A tape recorded in 10 bits can have up to five recorded bands and has the following configuration, as presented in the figure below:

where:

(1) Header: presents the satellite characteristics, recording date, format, etc.

(2) Geodesic Reference Matrix (GRM): presents the image navigation data.

(3) TIP data: image documentation data, such as row, column, resolution, etc.

(4) AVHRR data: the image itself.

A tape using 8 bits to store the data may have up to three recorded bands and has the following configuration:


The AVHRR images recorded by INPE at the Cachoeira Paulista labs have no radiometric or geometric correction applied.

The AVHRR image width is defined by the sensor scanning angle: 2048 samples (pixels) per channel for each Earth scan line. For tapes recorded at Cachoeira Paulista, the number of scan lines is given by the receiving antenna's range; for instance, recording starts near the Rio Grande do Sul region and goes up to the central Amazon region.


See also:
Orbital Systems.





GRIB Files

GRIB (GRIdded Binary) is a format for grid-point values expressed in binary form. Its purpose is to increase the transmission rate and to save storage, since it is a compact data format.

The GRIB format is organized in blocks:

  • block 0: gives the size of the file;
  • block 1: gives the product identification (who generated it, date, etc.);
  • block 2: gives the grid description (number of rows, number of columns, resolution, and the projection used for data generation);
  • block 3: indicates the insertion or removal of data at certain points; currently this block is not used when SPRING generates GRIB files;
  • block 4: contains the data itself in raster format; in this matrix, the data may use any number of bits per value;
  • block 5: contains the file termination information.
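As a loose illustration of this block layout (a hypothetical, much-simplified model: the real GRIB encoding is a binary standard and far more involved), the blocks can be thought of as tagged sections:

```python
# Hypothetical, simplified stand-in for the GRIB block layout described above.
grib_file = {
    "block0": {"total_size": 12345},                            # file size
    "block1": {"producer": "SPRING", "date": "940131"},         # product id
    "block2": {"rows": 512, "cols": 512, "projection": "UTM"},  # grid description
    "block3": None,                          # unused in SPRING's GRIB output
    "block4": [[0] * 512 for _ in range(512)],                  # raster data
    "block5": {"end": "7777"},                                  # termination
}

print(grib_file["block2"]["rows"], grib_file["block2"]["cols"])
```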

NOTE: Images produced by the IMPIMA reading module, and images stored internally by the SPRING module, use the GRIB format.

In SPRING, when the user wants to load an image into the working Project, this must be done through the Import GRIB files sub-menu of the File menu bar, with images in the GRIB format. The image will be automatically converted to the current project's projection, and its limits will be defined according to the project (see how to perform image georeferencing when importing GRIB images).


SPRING Images

The techniques available for entering remote sensing images into SPRING are:

  • read tapes, CD-ROMs, or even the hard drive using the IMPIMA module, and later perform the registration using the Spring module (steps ima1 to ima2 and ima2 to ima3 in the figure below);
  • import directly with the Spring module from other formats, such as TIFF, RAW or SITIM (steps ima1 to ima3 in the figure below).
[Figure: ima_grib.gif - image input flow]

As can be seen in the figure above, there are two different procedures to read an image into the Spring module. The meaning of each step is described below:

  • from ima1 to ima2, the "Impima" module is used to read images in several different formats (BSQ, BIL, TIFF, RAW, GRIB and SITIM), producing as output another image in the GRIB format. The output image can be smaller (a sub-scene) or the same size as the input image;
  • from ima2 to ima3, the "Spring" module is used to register the GRIB image read by "Impima". In this process the user must correct the image geographically, using control points acquired by the user, and the GRIB file is imported into a real, previously defined project. If desired, it is possible to import the GRIB file even if there is no project, letting the system create one without any cartographic projection;
  • from ima1 to ima3, "Spring" is used to import images in one of the formats mentioned above. This import can target a real, existing project, or a project yet to be defined, as long as the image fits the project's bounding rectangle exactly, as a function of the number of rows, the number of columns and the image resolution. It is possible to import the image into a project without a projection by just giving the image parameters (resolution, number of rows and columns).


NOTE: Internally, the "Spring" module stores images in the GRIB format, where each band is represented by one GRIB file. This differs from a GRIB file generated by "Impima", which can contain several bands of the same scene.


