DIGITAL IMAGE

In orbital remote sensing, large volumes of data are used to represent an image, and these data can be manipulated in digital form to extract information. Each point captured by the sensor corresponds to a minimum area called a pixel (picture element), which must be geographically identified, and for which digital values are registered, related to the energy reflected in well-defined slices (bands) of the electromagnetic spectrum.

A digital image can be defined as a two-dimensional function of the light intensity reflected or emitted by a scene, I(x,y), where I stands for the light intensity at the spatial coordinate (x,y). This intensity is represented by an integer, non-negative, finite value called the gray level.

In order to perform digital processing on remote sensing images, the images must be in a digital format. Basically, there are two ways to obtain a digital image: (1) acquire the remote sensing image in analog format (for instance, an aerial photograph) and then digitize it, or (2) acquire the remote sensing image directly in digital format, such as the data on a CCT (Computer Compatible Tape) recorded by satellites such as Landsat or SPOT.

The image digitizing process corresponds to a discretization (or sampling) of the scene being considered, by superimposing a hypothetical mesh over it, and the attribution of an integer (the gray level) to each point of this mesh (a process known as quantization). In satellites such as Landsat and SPOT, the electrical signal detected in each of the channels is converted on board by an analog/digital system, and its output is sent to the receiving stations by telemetry. The images from these satellites are sampled with a large number of points (the images of the Thematic Mapper sensor on board the Landsat satellite have over 6000 samples per row).
In addition, the images are multispectral: they are a collection of images of a single scene, acquired at the same moment by several sensors, each with a different spectral response.
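The sampling and quantization steps described above can be sketched as follows. This is a minimal illustration, not SPRING code: the continuous scene radiance is a made-up function, and the mesh size and bit depth are hypothetical parameters.

```python
import math

def scene(x, y):
    """Hypothetical continuous radiance of the scene at (x, y), in [0, 1]."""
    return 0.5 + 0.5 * math.sin(x) * math.cos(y)

def digitize(n_rows, n_cols, bits):
    """Sample the scene on an n_rows x n_cols mesh and quantize each
    sample to an integer gray level using the given number of bits."""
    levels = 2 ** bits                  # e.g. 8 bits -> 256 gray levels
    image = []
    for i in range(n_rows):
        row = []
        for j in range(n_cols):
            radiance = scene(i * 0.1, j * 0.1)               # sampling
            gray = min(int(radiance * levels), levels - 1)   # quantization
            row.append(gray)
        image.append(row)
    return image

img = digitize(4, 4, 8)
# Every value in img is an integer gray level in the range 0..255.
```

Note how the two operations are independent: sampling fixes the spatial mesh, while quantization fixes the number of available gray levels.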
Image characterization

It is possible to represent an image by a data matrix, where the rows and columns define the spatial coordinates of the pixels. A finite number of bits is used to represent the scene radiance for each pixel. Radiance is the radiant flux from a source, in a given direction, per unit area. The radiance measured at each pixel, represented by its gray level, includes not only the radiance reflected by the surface within the pixel's scene area, but also the radiance contributed by atmospheric scattering. The continuous radiance of a scene is quantized into discrete gray levels in the digital image, and the number of available levels is given by the number of bits used per pixel. New-generation sensors usually acquire images in either 8 or 10 bits (thus 256 or 1024 different gray levels). The gray level represents the average radiance of a relatively small area of the scene. This area is determined by the sensor's height in the satellite and other parameters, such as the IFOV (Instantaneous Field Of View), which is the angle formed by the geometrical projection of a single detector element onto the Earth's surface. The figure below shows the coordinate system usually used to represent a digital image: the x axis gives the column number, and the y axis the row number.
![Figure: image coordinate system]()

In the multispectral case the digital representation is more complex, because for each (x,y) coordinate there is a set of gray level values. Thus each pixel is represented by a vector with as many dimensions as there are spectral bands. A spectral band is the interval between two wavelengths in the electromagnetic spectrum.
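The pixel-as-vector idea can be sketched with plain lists; the band count and gray values below are illustrative, not taken from any real scene.

```python
# A multispectral image as one gray-level matrix (rows x columns)
# per spectral band.
n_bands = 3
bands = [
    [[10, 12], [14, 16]],   # band 1
    [[20, 22], [24, 26]],   # band 2
    [[30, 32], [34, 36]],   # band 3
]

def pixel_vector(x, y):
    """Return the pixel at column x, row y as a vector with one
    component per spectral band."""
    return [bands[b][y][x] for b in range(n_bands)]

vec = pixel_vector(1, 0)   # the pixel at column 1, row 0
# vec has n_bands components, one gray level per band: [12, 22, 32]
```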
Resolution and Bands

SPRING allows the direct input of images from the Landsat, SPOT and NOAA satellites. Each of these images presents distinct resolution characteristics. Analog images, such as photographs on paper, can also be treated by SPRING: once scanned, they can be imported in the TIFF format. Resolution is a measure of a sensor system's ability to distinguish between responses that are spectrally similar or spatially close. Resolution can be classified as spatial, spectral, or radiometric.

Spatial Resolution: a measure of the smallest angular or linear separation between two objects. For instance, a resolution of 20 meters implies that objects less than 20 meters apart will, in general, not be discriminated by the system.

Spectral Resolution: a measure of the width of the spectral slices of the sensor system. For instance, a sensor operating in the 0.4 to 0.45 µm slice has a finer spectral resolution than a sensor operating in the 0.4 to 0.5 µm slice.
Radiometric Resolution: associated with the sensor system's sensitivity in distinguishing between two different intensity levels of the returning signal. For instance, a resolution of 10 bits (1024 digital levels) is better than a resolution of 8 bits (256 levels). The table below presents the resolution characteristics of the Thematic Mapper (TM), Haute Résolution Visible (HRV) and Advanced Very High Resolution Radiometer (AVHRR) sensor systems, on board the Landsat, SPOT, and NOAA satellites respectively.
The sensors' different spectral bands have distinct applications in remote sensing studies. As a guide, the user should consult the table below to select the best band for his or her project.
Image Format

SPRING accepts input from several different devices, in the superstructure or fast-format formats, for the sensor systems described above.
Superstructure Format

The superstructure format (tape and CD-ROM standards) organizes the data in four distinct hierarchical levels: the volume, the file, the record and the data fields. A group of files composes a logical volume, which can be stored in several physical volumes (tapes), and a physical volume can store several logical volumes; that is, it is possible to have a tape with several files (bands), or one band spanning more than one physical volume. The basic components of the superstructure are the volume directory file and the file descriptor. The volume directory file defines and identifies a logical volume (for instance, a set of bands). The file descriptor is the first record inside each data file (each band) and defines the file's internal structure, providing parameters for the interpretation of its contents.

Fast-format

The fast-format has a minimal amount of general data, compressing the data as much as possible on a tape, so that reading and writing become easier. This format is available only for the band sequential image structure (BSQ), used for TM/Landsat images. The image files are placed on a single tape, and each tape may have more than one file. There are two file types on a fast-format tape: the header file and the image files. The header file is the first file on each tape and contains the data description, such as date, processing options and projection information for the product. The image files contain only the pixel information. These data can be blocked or not; blocking is used to condense an image as much as possible. Most of the time, geocoded images are blocked.
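How the BSQ structure mentioned above lays pixels out can be sketched as an offset computation. This is an illustration only: it assumes a headerless, unblocked 8-bit file, and the dimensions are those of a full TM/Landsat scene.

```python
# BSQ (band sequential): all rows of band 1 are stored first,
# then all rows of band 2, and so on.
n_bands, n_rows, n_cols = 7, 6177, 6489   # a full TM/Landsat scene

def bsq_offset(band, row, col):
    """Byte offset of a pixel in a hypothetical headerless 8-bit BSQ
    file (band, row and col are zero-based)."""
    return (band * n_rows + row) * n_cols + col

# The first pixel of band 2 comes right after the last pixel of band 1:
assert bsq_offset(1, 0, 0) == n_rows * n_cols
```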
Next, the recording patterns used for the TM/Landsat, HRV/SPOT and AVHRR/NOAA sensors are presented.
The TM/Landsat image

The TM/Landsat images read by SPRING have to be in the BSQ (band sequential) standard. This standard is frequently generated by INPE at the Cachoeira Paulista labs and is available in the superstructure and fast-format formats. In the BSQ standard the image is recorded on the tape band by band, as shown in the scheme below:
![Figure: BSQ recording scheme]()

The user can select digital products from the TM/Landsat tapes with different geometric correction levels. The possible levels are 4, 5, and 6, described below.
Level 4 - The INPE standard product is generated at this level. The geometric correction computations are applied using the ephemeris and attitude data from the satellite.

Level 5 - The procedures are the same ones used at level 4: the basic geometric correction, with resampling using the nearest neighbor technique and control points acquired from an official cartographic base.

Level 6 - The procedures are similar to the level 5 ones, using resampling by cubic convolution.
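The nearest neighbor resampling named for the level 5 product can be sketched as follows. This is a generic illustration of the technique, not INPE's implementation; the input matrix and scale factor are made up.

```python
src = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]

def resample_nearest(src, scale):
    """Resample a gray-level matrix by a scale factor, giving each
    output pixel the value of the nearest source pixel."""
    rows, cols = len(src), len(src[0])
    out_rows, out_cols = int(rows * scale), int(cols * scale)
    out = []
    for i in range(out_rows):
        si = min(int(round(i / scale)), rows - 1)
        row = []
        for j in range(out_cols):
            sj = min(int(round(j / scale)), cols - 1)
            row.append(src[si][sj])   # no new gray values are created
        out.append(row)
    return out

big = resample_nearest(src, 2)   # 3x3 -> 6x6
```

Unlike cubic convolution (level 6), nearest neighbor never interpolates, so the output contains only gray levels that exist in the input; this preserves radiometry at the cost of a blockier result.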
The scene size of a TM/Landsat image is 6177 rows by 6489 columns, and the scene can be divided into quadrants of 3087 rows by 3243 columns each. The quadrants are placed in the scene as presented in the figure below:
Landsat images on CD-ROM

The TM/Landsat data on CD-ROM are distributed in full-frame format (about 185 x 185 km) or quadrants (about 96 x 96 km), with 1 to 7 spectral bands. All scenes are supplied with the same basic radiometric correction, consisting of the equalization of the sensors' responses, which removes the striping effect of the TM/Landsat data. Histogram equalization and corrections for the sun elevation angle are not applied. The CD-ROM is formatted in the IBM-DOS standard, and it can be read in any device accepting optical disks in the ISO-9660 standard. The disk is structured in subdirectories:
For instance, band 7 of the Rio de Janeiro scene, quadrant A, acquired on January 31, 1994, should be accessed with the name: \217_076a\940131\band7.dat. The CD-ROM images are recorded in the superstructure format. The CD-ROM might also have a \DEMO directory, with some demonstration images in TIFF or JPEG format.
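The naming convention can be sketched as below. Note the layout (scene directory, date directory as YYMMDD, band file) is inferred from the single example above, so this helper is hypothetical and may not cover every disk.

```python
def cdrom_path(scene, quadrant, date, band):
    r"""Build a CD-ROM file name like \217_076a\940131\band7.dat.
    date is assumed to be YYMMDD, e.g. '940131' for January 31, 1994."""
    return "\\%s%s\\%s\\band%d.dat" % (scene, quadrant, date, band)

path = cdrom_path("217_076", "a", "940131", 7)
assert path == "\\217_076a\\940131\\band7.dat"
```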
HRV/SPOT Image

The SPRING image reading program (IMPIMA) allows reading HRV/SPOT images in the band interleaved by line (BIL) format, usually generated by INPE at the Cachoeira Paulista lab and available to the general user. In the BIL format, each row is recorded sequentially for all bands, as shown in the scheme below:
![Figure: BIL recording scheme]()
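The BIL layout in the scheme can be sketched as an offset computation, analogous to the BSQ case. This is an illustration only, assuming a headerless 8-bit file with made-up dimensions.

```python
# BIL (band interleaved by line): row 1 of band 1 is followed by
# row 1 of band 2, and so on, before row 2 begins.
n_bands, n_rows, n_cols = 3, 3000, 3000   # hypothetical multispectral scene

def bil_offset(band, row, col):
    """Byte offset of a pixel in a hypothetical headerless 8-bit BIL
    file (band, row and col are zero-based)."""
    return (row * n_bands + band) * n_cols + col

# Row 0 of band 2 starts right after row 0 of band 1:
assert bil_offset(1, 0, 0) == n_cols
```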
The user may select digital products from the HRV/SPOT tape with different geometric correction levels. The possible levels are 1A, 1B, 2A and 2B, described below.

Level 1A: the image contains the original data with radiometric calibration, absolute and relative, through detector normalization, without geometric correction or calibration among bands.

Level 1B: the radiometric correction is the same as at level 1A, but resampling is used to compensate for internal and external system effects, and geometric correction is applied for the effects of perspective, Earth rotation, and satellite speed variation.

Level 2A: the radiometric correction is the same as at level 1B, with a geometric pre-registration to a map, using the satellite attitude data.

Level 2B: the image is geometrically corrected to a map, using ground control points and satellite attitude data.
The scene format of a SPOT image depends on whether the information is multispectral (bands 1, 2 and 3) or panchromatic (PAN), and also on the image correction level. The user has access to this information in a listing provided with the tape.
The SPOT image size is defined according to the correction level, as presented in the table below:
The SPOT image scene can also be divided into quadrants, as shown in the figure below. Each quadrant represents an area of about 40 x 40 km.
![Figure: SPOT scene quadrants]()
The user may request a scene located between quadrants. In this case the user has to identify the desired area in the image and define a 40 x 40 km square enclosing the area of interest.
AVHRR/NOAA image

The AVHRR/NOAA images provided by INPE are in the band interleaved by pixel (BIP) format. In the BIP format, each pixel is recorded sequentially for all bands, as presented in the scheme below,
where n is the number of the last pixel in the row; the following pixel is recorded on the next row. The tapes can be recorded in 10 bits (full) or 8 bits (compressed). A tape recorded in 10 bits can have up to five registered bands and has the configuration presented in the figure below:
![Figure: 10-bit AVHRR tape configuration]()

where: (1) "Header": presents the satellite characteristics, recording date, format, etc.; (2) Geodesic Reference Matrix (GRM): presents the image navigation data; (3) TIP data: image documentation data, such as row, column, resolution, etc.; (4) AVHRR data: the image itself. A tape using 8 bits to store the data may have up to three registered bands and has the following configuration:
![Figure: 8-bit AVHRR tape configuration]()

The AVHRR images recorded by INPE at the Cachoeira Paulista labs do not have any radiometric or geometric correction applied. The AVHRR image width is defined by the sensor scanning angle, that is, 2048 samples (pixels) per channel for each Earth scan. For the tapes recorded at Cachoeira Paulista, the number of rows is given by the receiving antenna range; for instance, the recording may start near the Rio Grande do Sul region and go up to the central Amazon region.
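The BIP layout described above can be sketched as an offset computation, completing the trio with BSQ and BIL. This is an illustration only: it counts sample positions rather than bytes (so it applies to both the 10-bit and 8-bit recordings) and ignores the header, GRM and TIP blocks.

```python
# BIP (band interleaved by pixel): all band values of pixel 1 are
# stored together, then all band values of pixel 2, and so on.
n_bands, n_cols = 5, 2048   # e.g. a 10-bit "full" AVHRR tape, 5 bands

def bip_offset(band, row, col):
    """Sample offset (not bytes) of a value in a hypothetical
    headerless BIP stream (band, row and col are zero-based)."""
    return (row * n_cols + col) * n_bands + band

# The five band values of the first pixel occupy offsets 0..4,
# and the second pixel's first band follows immediately:
assert bip_offset(0, 0, 1) == n_bands
```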
GRIB Files

GRIB (GRIdded Binary) is a format for grid point values expressed in binary mode. Its purpose is to increase the transmission rate and save storage space, since it is a compressed data format. The GRIB format is organized in blocks:
NOTE: The output of the IMPIMA reading module, and the images handled internally by the SPRING module, are stored using the GRIB format.
In SPRING, when the user wants to load an image into the working Project, this has to be performed using the Import GRIB files submenu in the File menu, with images in the GRIB format. The image will be automatically converted to the current project's projection, and its limits will be defined according to the project (see how to perform image georeferencing when importing GRIB images).

SPRING Images

The available techniques to enter remote sensing images into SPRING are:
![Figure: image input procedures in SPRING]()

As can be seen in the figure above, there are two different procedures to read an image into the SPRING module. The meaning of each process is described below: