Photogeology Notes

INTRODUCTION TO AERIAL PHOTOGRAPHY –

Aerial photography has been defined as the science of taking a photograph from a point in the air for the purpose of making some type of study of the surface of the earth. In the study of physical and geological features, the use of aerial photographs has proved to be of immense help. The production of suitable aerial photographs involves an understanding of many complex steps and processes. The net result of any aerial photographic mission is the photographic negative, which is the product of both favourable and unfavourable factors acting simultaneously. A good aerial photograph has to meet a certain standard of geometrical accuracy, and it should have suitable tonal contrast to present a faithful record of the terrain photographed. For photo-interpretation purposes, image quality is of vital importance.

Aerial photography was the first method of remote sensing, and even today, in the age of the satellite and the electronic scanner, aerial photographs remain the most widely used type of remotely sensed data. The six characteristics of aerial photography that make it so popular are its availability, economy, synoptic viewpoint, time-freezing ability, spectral and spatial resolution, and three-dimensional perspective.

1. Availability: aerial photographs are readily available at a range of scales for much of the world.

2. Economy: aerial photographs are cheaper than field surveys and are often cheaper and more accurate than maps for many countries of the world.

3. Synoptic viewpoint: aerial photographs enable the detection of small-scale features and spatial relationships that would not be evident on the ground.

4. Time-freezing ability: an aerial photograph is a record of the Earth's surface at one point in time and can therefore be used as an historical record.

5. Spectral and spatial resolution: aerial photographs are sensitive to radiation in wavelengths outside the spectral sensitivity range of the human eye, as they can sense both ultraviolet (0.3–0.4 μm) and near-infrared (0.7–0.9 μm) radiation. They can also be sensitive to objects outside the spatial resolving power of the human eye.

6. Three-dimensional perspective: a stereoscopic view of the Earth's surface can be created and measured both horizontally and vertically, a characteristic that is lacking in the majority of remotely sensed images.

TYPES OF AERIAL PHOTOS –


Classification of Aerial Photograph
Aerial photographs can be classified on various bases, such as the position of the camera axis, R.F. (representative fraction) scale, coverage angle, combination of film and filter, lens system, etc.

1. According to camera axis position


On the basis of the camera axis position, aerial photographs are classified into three groups. The camera axis denotes the imaginary line joining the centre of the photo-plane and the focus point on the ground through the centre of the camera lens. The three types are as follows.
 Vertical Aerial Photograph: When the camera axis makes a perfect 90° angle with the ground or the earth's surface, the photograph obtained is called a vertical photograph. It is, however, quite hard to obtain a perfectly vertical aerial photograph because of the curvature of the earth's surface, so a deviation of ±3° is tolerated.
 Low-Oblique Aerial Photograph: Similarly, when the camera axis is inclined between 15° and 30°, the aerial photograph so obtained is called a low-oblique aerial photograph.
 High-Oblique Aerial Photograph: When the camera axis is inclined to 60° or more, the aerial photograph so obtained is called a high-oblique aerial photograph.
High-oblique and low-oblique aerial photographs have a special significance in the field of reconnaissance surveys.

2. Based on the scale of photograph


If the R.F. or Representative Fraction scale of the imagery is considered, aerial photographs can be classified into another three types. A small scale represents a larger geographic area at the given size of the photograph, whereas a large scale covers a relatively small geographic area at the same size and provides better detail than a small scale photograph. On this basis, the three types of aerial photographs are as follows.
 Small Scale Aerial Photograph: When a scale of 1:30,000 or smaller is used to capture an aerial photograph, it is called a small scale aerial photograph. These photographs cover a large area in less detail.
 Medium Scale Aerial Photograph: If the scale of the aerial photograph ranges between 1:15,000 and 1:30,000, the photograph obtained is referred to as a medium scale photograph.
 Large Scale Aerial Photograph: Aerial photographs with a scale of 1:15,000 or larger are considered large scale aerial photographs. They cover a small area in greater detail than small scale photographs.
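The three scale classes above can be expressed as a small helper. A minimal sketch in Python; the function name and the assignment of the boundary values (1:15,000 and 1:30,000, which the text places in two classes at once) are my own assumptions.

```python
# Classify an aerial photograph by its representative-fraction (RF) scale,
# using the thresholds quoted above. Hypothetical helper for illustration;
# boundary denominators are assigned by choice, as the text overlaps them.

def classify_scale(denominator: int) -> str:
    """Return the scale class for an RF of 1:denominator."""
    if denominator >= 30_000:
        return "small scale"    # 1:30,000 or smaller (larger denominator)
    elif denominator > 15_000:
        return "medium scale"   # between 1:15,000 and 1:30,000
    else:
        return "large scale"    # 1:15,000 or larger (smaller denominator)

print(classify_scale(50_000))  # small scale
print(classify_scale(20_000))  # medium scale
print(classify_scale(10_000))  # large scale
```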

3. On the basis of coverage angle


Now taking the coverage angle into account, there are four types of aerial photographs, viz.,
 Narrow angle aerial photograph, in which the onboard camera covers less than 60°.
 Standard angle aerial photograph, which provides coverage of the order of 60°.
 Wide angle aerial photograph, which covers an angle of around 90°.
 Ultra wide angle aerial photograph, which covers an angle of almost 120°.

4. Considering the film and filter of camera


On the basis of the variation of film and filter combinations of the camera, seven types of photograph can be obtained. These are,
 Panchromatic : Panchromatic refers to the single band sensor, used in the camera for aerial photography. This produces
black and white or grayscale images.
 Colour : The film used in colored aerial photography captures the visible wavelengths separately and produces colorful
images. The obtained photographs are very much helpful to observe surface features and others.
 Infrared : In this case, the film or the camera sensor bands detect only infrared radiation and yield grayscale images as usual. This imagery is mainly used to study water bodies and vegetation cover.
 Colour-Infrared : A colour-infrared aerial photograph denotes that the camera can detect both visible and infrared wavelengths.
 Thermal Infrared : Thermal infrared aerial photographs are produced by a camera whose film captures the thermal energy emitted as heat by the object under investigation. These photographs are useful for producing thermal gradient maps and related studies.
 Radar : These are photographs obtained by cameras that detect only radar or microwaves. The imagery contains noise, and radiometric techniques can be applied to reduce it.
 Spectra-zonal : Spectra-zonal images are obtained when specific portions of the electromagnetic spectrum are captured by the camera sensor.

5. Based on Lens System


Lastly, based on the number of lenses used for capturing photographs, or simply following the lens system, aerial photographs can be classified into several types, such as:
 Single Lens System : This is the most common and general lens system for aerial photography, where, as the name suggests, only one lens is attached to the camera.
 Multiple Lens System : This includes different lens combinations. The notable one is the three-lens combination, in which three camera lenses are arranged together for aerial photography; this system is also known as the Trimetrogon Lens System. Besides this, there are also two-lens, four-lens and nine-lens systems.

GEOMETRIC PRINCIPLES OF PHOTOGRAPHS –

Relief Displacement –

Displacement in the position of the image of a ground object due to topographic variation (relief) is called relief displacement. It is a common phenomenon on all remote sensing data products, particularly those of high-relief terrain. The magnitude of relief displacement is given as

Relief displacement ≈ r · h / H

where r is the distance of the object from the principal point, h is the object height and H is the flying height. Therefore, relief displacement depends upon the local terrain relief and the look angle at the sensor (which in turn depends upon the sensorcraft altitude and the distance of the ground feature from the nadir point).
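As a worked numerical example of the displacement formula above, with illustrative values that are not from the text:

```python
# Relief displacement d = r * h / H, as in the formula above.
# r: radial distance of the image point from the principal point (photo units),
# h: object height above datum, H: flying height above datum (ground units).

def relief_displacement(r_mm: float, h_m: float, H_m: float) -> float:
    """Displacement on the photo, in the same units as r_mm."""
    return r_mm * h_m / H_m

# A 90 m hill imaged 80 mm from the principal point, flown at 3,000 m:
d = relief_displacement(80.0, 90.0, 3000.0)
print(f"{d:.1f} mm")  # 2.4 mm
```

Note that the displacement grows both with object height and with distance from the principal point, which is why the photo centre is preferred for mosaicking.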

The surface of the earth is not smooth and flat. As a consequence, there is a natural phenomenon that disrupts true
orthogonality of photo image features. In this respect, an orthogonal image is one in which the displacement has been
removed, and all of the image features lie in their true horizontal relationship.

Causes of Displacement –

Camera tilt, earth curvature, and terrain relief all contribute to shifting photo image features away from true geographic
location. Camera tilt is greatly reduced or perhaps eliminated by gyroscopically-controlled cameras.
VERTICAL EXAGGERATION AND DISTORTIONS –

VERTICAL EXAGGERATION –

The stereoscopic image perception is an imaging phenomenon, the establishment of which is a psycho-physiological problem. The proper understanding of stereoscopic vision and model formation is still not well established. It is, therefore, impossible to speak in terms of exact parameters in its production, since stereoscopic perception varies from person to person. It is a matter of common experience that in stereoscopic viewing of a stereopair, the optical model does not appear in its natural proportions: the topographic forms appear much higher and the slopes much steeper. This phenomenon, which is much more perceptible in large scale photographs of high relief, is known as vertical exaggeration in aerial photo-interpretation.

In other words, vertical exaggeration is the exaggeration of vertical heights with respect to horizontal distances. This element of vertical exaggeration is characteristically present in almost all stereoscopic models.

Effects of Vertical Exaggeration in Aerial Photo interpretation :-

As far as photo-interpretation is concerned, the element of vertical exaggeration does not seriously affect the stereoscopic model, as each point constituting the topography is uniformly exaggerated. Thus the relative topographic expression is not disturbed; rather, the vertically exaggerated stereo model produces a much more prominent model of the terrain. In regions of low relief, such as peneplaned areas, vast plains, rivers in their mature and old stages, fluvio-glacial zones etc., vertical exaggeration has proved to be of great help. In such flatlands, the minor subdued topographic features appear more prominent and thus aid better photo-interpretation. In the case of highly eroded geological structures, an element of vertical exaggeration helps in distinguishing a variety of slopes, such as resequent or obsequent slopes, low ridge and valley topography, and minor depressions and elevations, which in turn give clues for determining the structural configuration of the rocks.

Problems arise only when the photo-interpretation has to be quantified in terms of slope angles, dips of beds, heights of physiographic features etc. Visual measurements give their exaggerated values, and if no correction is made for vertical exaggeration, the result may be erroneous. Vertical exaggeration is universally present in all stereoscopic models, whether paper prints, film diapositives or glass plate diapositives.

Causes of Vertical Exaggeration :-

A proper understanding of vertical exaggeration is of fundamental value in the study of stereoscopic models because in many geological problems the photo-interpreter may have to rely on visual dip or slope estimation. If a pair of aerial photographs could be viewed with a base-height relation corresponding to the true relation in aerial photography, the true relation of vertical and horizontal distances would appear in its true perspective, i.e. there would be no vertical exaggeration in the stereoscopic model. However, it is not possible to duplicate these conditions of base-height ratio, and therefore the stereoscopic model appears vertically exaggerated. The amount of vertical exaggeration of a slope or dip angle in a stereoscopic model is related to the tangent of the slope or dip angle. For example, if a stereoscopic model has a vertical exaggeration factor of two, a true slope or dip angle would be exaggerated to an angle whose tangent is two times the tangent of the true slope or dip (Fig. 6.1).

For simplicity of explanation, vertical exaggeration results from the wide spacing of the camera positions at the time of photography, in contrast to the narrow spacing of the human eyes in the normal viewing arrangement. Thus, vertical exaggeration is fundamentally related to the base-height ratio (B/H), which is the ratio of the air-base distance to the flying height. As this ratio increases, the vertical exaggeration also increases.
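The tangent relation described above can be checked numerically. A minimal sketch in Python; the function name and the sample dip are illustrative assumptions.

```python
import math

# Apparent (exaggerated) dip from true dip, using the tangent relation:
# tan(apparent dip) = VE * tan(true dip), where VE is the vertical
# exaggeration factor of the stereoscopic model.

def apparent_dip(true_dip_deg: float, ve: float) -> float:
    """Dip angle (degrees) as it appears in a model with exaggeration ve."""
    return math.degrees(math.atan(ve * math.tan(math.radians(true_dip_deg))))

# With a 2x exaggeration, a 30-degree bed looks considerably steeper:
print(round(apparent_dip(30.0, 2.0), 1))  # 49.1
```

This is why uncorrected visual dip estimates from a stereo model systematically overstate the true dip.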

FACTORS AFFECTING VERTICAL EXAGGERATION –

The various factors which influence vertical exaggeration may be kept in two groups – photographic and stereoscopic – acting independently or in combination. These factors are as follows:

Photographic factors:

1. Air-base (B)
2. Camera height (H)
3. Focal length (f)

Stereoscopic factors:

1. Photographic separation (s)
2. Vertical viewing distance (d)
3. Eye-base (b)

DETERMINATION OF VERTICAL EXAGGERATION –

For a photo-interpreter it is essential always to be aware of the fact of vertical exaggeration. When certain visual estimations have to be made, it becomes necessary to determine the vertical exaggeration factor, i.e. to know how many times the stereoscopic model appears exaggerated. There are several methods for determining the vertical exaggeration factor; some of the most commonly used ones are as follows.

One of the common methods of determining the vertical exaggeration ratio (R), popularly known as Fichter's method (Fichter, 1954), is based on the determination of the stereoscopic constant (K) from standard stereoscopic charts; the value of R is found from:

R = (b / f) · K

where,

R = exaggeration ratio,
b = air base, at the scale of the photographs, practically assumed equal to the distance between certain points,
f = focal length of the camera,
b/f = base ratio,
K = the stereoscopic constant.

The factor K definitely belongs in the equation, for otherwise the numerical relationship would come out wrong. It is called the stereoscopic constant because it has been found to have a constant value in practical stereoscopic work.
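Fichter's equation can be evaluated directly once K has been read from the charts. The numbers below are illustrative placeholders, not actual chart values.

```python
# Fichter's exaggeration ratio R = (b / f) * K, from the equation above.
# b: air base at photo scale, f: camera focal length (same units as b),
# K: stereoscopic constant read from standard stereoscopic charts.

def exaggeration_ratio(b: float, f: float, K: float) -> float:
    """Vertical exaggeration ratio R by Fichter's method."""
    return (b / f) * K

# Illustrative values only (b and f in mm, K assumed):
print(round(exaggeration_ratio(b=90.0, f=152.0, K=5.0), 2))  # 2.96
```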

CONCLUSION –

From the aforesaid discussion it becomes apparent that in the practice of aerial photo-interpretation one cannot get rid of the element of vertical exaggeration. One should develop the habit of always being aware that vertical exaggeration is present in all stereo-models. At times, a very unnatural look of the terrain may be deceptive and may lead to erroneous recognition of geological features, and naturally to wrong photo-interpretation. In developing the habit of correct photo-interpretation, it is good practice to keep comparing the stereo-model of the terrain with its corresponding topographic map, as well as with the actual topography of the area observed during field visits. Even field photographs of the topographic features may help in making the visual comparison.

DISTORTIONS –
Distortion - Shift in the location of an object that changes the perspective characteristics of the photo.

Lens distortion - Small effects due to flaws in the optical components (i.e. the lens) of camera systems, leading to distortions that are typically more serious at the edges of photos. Car windows/windshields and carnival mirrors are probably the best known everyday examples of this type of effect. These effects are radial from the principal point (making objects appear either closer to, or farther from, the principal point than they actually are) and may be corrected using calibration curves.

Optical Distortion –

The lens distortion of an aerial camera is usually very small and negligible, but it becomes noticeable when the relief is high and stereoscopic plotting instruments are used. It is considered an image displacement radial to the principal point, and its value increases with increasing distance from the principal point. A typical 9″ × 9″ aerial photograph has about 0.005 inch of optical distortion at a distance of 4 inches from the principal point and 0.006 inch at the extreme corner. Optical distortion may be corrected in the stereoplanigraph instrument by projecting the picture through a lens which is a duplicate of the one used for taking the photograph, thus compensating for the distortion. In the Multiplex type of instrument, the lens of the diapositive printer has opposite distortion characteristics. In the Wild A-5 plotter the photograph is viewed through a special glass correction plate of varying thickness. A correction graph may also be used, consisting of a series of circles concentric with the principal point, permitting measurement of image coordinates.

Paper and Film Distortion -

These distortions are usually erratic in direction and magnitude and in practice are difficult to correct. Film distortions are usually smaller, more uniform and more systematic than those due to paper, though in both cases the effects are similar. The water content of the material, the length of storage time, the relative humidity of the storage atmosphere, and the mechanical treatment and handling of the films and papers are among the important factors that determine distortions. Paper and cellulose acetate change appreciably with changes in relative humidity, but the change is not the same in all directions; moreover, on return to the original humidity the material may not necessarily return to its original dimensions. The effects of film and paper distortions, which are totally unrelated, get mingled on the paper prints. For better results, therefore, original glass plate negatives are preferred for map compilation. Mounting photographic prints on metal plates is one way to reduce distortion effects. Some special cameras have a glass grid in the focal plane which confines distortions to the small squares of the grid.

MEASUREMENTS FROM AERIAL PHOTOGRAPHS. SCALE, DISTANCE, AREA AND HEIGHT –

SCALE –

Scale is the ratio of the distance between two points on an image to the actual distance between the same
two points on the ground. Scale is an important describing factor of vertical aerial photography.

Two terms that are normally mentioned when discussing scale are:

Large Scale - Larger-scale photos (e.g. 1:25 000) cover small areas in greater detail. A large scale photo simply
means that ground features are at a larger, more detailed size. The area of ground coverage that is seen on
the photo is less than at smaller scales.

Small Scale - Smaller-scale photos (e.g. 1:50 000) cover large areas in less detail. A small scale photo simply
means that ground features are at a smaller, less detailed size. The area of ground coverage that is seen on
the photo is greater than at larger scales.

The scale (S) of a photograph is determined by the focal length of the camera and the vertical height of the lens above the ground. The focal length (f) of the camera is the distance measured from the centre of the camera lens to the film. The vertical height of the lens above the ground (H − h) is the height of the lens above sea level (H) minus the height of the ground above sea level (h), when the optical axis is vertical and the ground is flat. These are related by the formula:

S = f / (H − h)
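A worked example of the scale formula, with illustrative values (a common 152 mm mapping lens is assumed here, not stated in the text):

```python
# Photo scale S = f / (H - h), from the formula above.
# f: focal length, H: flying height above sea level, h: ground elevation,
# all in the same units. The function returns the denominator N of 1:N.

def scale_denominator(f_m: float, H_m: float, h_m: float) -> float:
    """Scale denominator N such that the photo scale is 1:N."""
    return (H_m - h_m) / f_m

# 152 mm lens flown at 3,800 m over terrain 760 m above sea level:
N = scale_denominator(0.152, 3800.0, 760.0)
print(f"1:{N:,.0f}")  # 1:20,000
```

Because h varies across the terrain, the scale of a single photograph is not constant: higher ground is imaged at a larger scale than valleys.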

DISTANCE –

Measuring distance from aerial photographs –

The simplest device for measuring distance from aerial photographs is the interpreter's scale. It is made of transparent plastic and has divisions in both white and black for measuring distances over both dark- and light-toned areas. For more accurate measurements a fine graticule can be placed in the eyepiece of a stereoscope. Both of these devices are used with the assumption that the ground surface is flat; when this is not the case, they can only give very approximate measurements of distance.

AREA –

Measuring area from aerial photographs –

Four methods are commonly used to measure area on aerial photographs: a transparent overlay, a polar planimeter, a table digitiser and an analogue image processor.

The transparent overlay is a fine grid printed onto an acetate sheet; to obtain the area of a region, the number of grid squares covering the region is simply counted. This method is neither quick nor accurate, but it is cheap.

A polar planimeter is a mechanical device consisting of a fixed arm and a wheel with a measuring device. The wheel is run around the circumference of the region of interest in a clockwise direction, and the measured circumference can be converted to the area of the region using a look-up table provided by the manufacturer. This method is cheap but fiddly; with practice, however, accurate measurements of area can be obtained.

A table digitiser is a quick, expensive and fairly accurate method of measuring area. The operator feeds the coordinates of the corners or boundary of the region of interest into a micro-computer which is programmed to calculate the area.

An analogue image processor is by far the most convenient way to measure areas that have a distinct photographic tone. The image is usually fed into the machine with the aid of a TV camera, and a tonal range is set that discriminates the region of interest. The total area is then read directly from the visual display unit on the instrument's console. Unfortunately, the variability of aerial photographic tones, both within and between similar objects, often reduces the accuracy of this technique.

HEIGHT –

HEIGHT MEASUREMENT WITH PARALLAX BAR –

The parallax bar or stereometer is the simplest and most useful instrument for determining heights from air photos. In an area of low relief, the height of an object can be calculated from a properly oriented pair of aerial photographs by the simple equation:

h = H · ΔP / b

where h is the height of the object, H is the height of the aeroplane above mean terrain level or datum, b is the photo-base and ΔP is the parallax difference. The last two quantities must be in the same units, either inches or millimetres; h will then have the same unit as H, which is usually feet or metres. The H in the equation can also be replaced by f/S.
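A worked example of the parallax equation, with illustrative measurements (not taken from the text):

```python
# Object height h = H * dP / b, as in the equation above.
# H: flying height above mean terrain (metres here), b: photo-base and
# dP: parallax difference from the parallax bar, both in the same units (mm).

def object_height(H_m: float, dP_mm: float, b_mm: float) -> float:
    """Height of the object, in the same units as H_m."""
    return H_m * dP_mm / b_mm

# Flying height 3,000 m, photo-base 90 mm, measured parallax difference 1.5 mm:
print(object_height(H_m=3000.0, dP_mm=1.5, b_mm=90.0))  # 50.0 (metres)
```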

UNIT - 2

Preparation of Photo-geologic Maps –

Radial triangulation is one of the most important techniques in photogeological work. It involves a simple procedure for rectifying several inherent defects (scale variation, relief displacement, tilt distortion etc.) that are unfavourable for map preparation. The preparation of geological maps, including rock contacts or outcrops, drainage lines, topographic forms and several other features, fundamentally requires the accuracy of a good survey map. This can be achieved by following the technique of radial triangulation.

After aerial photo-interpretation and geological annotation, the photo markings have to be transferred onto the base map. It is always desirable to divide the photographs into a number of triangles following the hand templet method. The sketch-master or stereosketch makes this transference easy, triangle by triangle. If a topographic or planimetric map of the area is already available, then the aerial photos and their corresponding features on the toposheet (or planimetric map) may simply be divided into appropriate triangles without making use of hand templets. The plotting of geological annotations will then be as accurate as that done by the prescribed technique of radial triangulation.

Mosaic –
A mosaic is an assemblage of aerial photographs, the edges of which have been cut and matched to form a continuous photographic representation of a portion of the earth's surface. Sometimes the term aerial mosaic is used with the same meaning. The mosaic, like a single photograph, provides only a two-dimensional view of the terrain. Usually such photo assemblies are prepared from small scale photographs so that a large area can be incorporated in a small mosaic. An aerial mosaic presents a complete and comprehensive view of the terrain, clearly depicting it synoptically at a single glance. Such a mosaic study is certainly of great value, as stratigraphic, structural and topographic continuity can be well appreciated. Even if a small area has to be studied in greater detail, a mosaic provides the overall setting of the various geological parameters.

Fundamentally, there are two types of mosaics: the controlled mosaic and the uncontrolled mosaic. Sometimes the term semi-controlled is also used to indicate a mosaic with characteristics intermediate between the two. The procedures for the preparation of these mosaics are as follows:

(a) Controlled Mosaic : A controlled mosaic is a compilation of rectified photographs, so assembled that their principal points and other selected intermediate points are located in their true horizontal positions. Each photograph is oriented in position by matching the photographic images of the selected control points to the corresponding plotted positions of the pre-established points.
(b) Uncontrolled Mosaic : For work where a very high degree of accuracy is not required, an uncontrolled mosaic can serve the purpose well. An uncontrolled mosaic involves fewer control points and a reasonable degree of accuracy; however, certain basic controls, such as pre-established horizontal control points and a uniform distribution of scale, are still required. In practice, the photographs are brought to a common scale before compilation of the mosaic. Usually the central portion of each photograph is used, as it is relatively free from relief displacement, tilt and scale variation. The photographs are laid out in strips in straight lines, and the different strips are then matched together to compile a mosaic of the entire area.

(c) Semi-controlled Mosaic : A semi-controlled mosaic is assembled using some combination of the specifications for controlled and uncontrolled mosaics. It is a compilation of photographs made without using rectified photographs, or without utilizing control for the positioning of each photograph; however, the scale and azimuth are kept within limits.

FLIGHT PLAN –
(1) Purpose of photography : Whether the photography is required for large scale or small scale mapping, or for general or detailed photo-interpretation, and for what specific purpose, such as geology, forestry or pedology.

(2) Area to be photographed : The shape, size, length, width, terrain elevation, strike direction of rock exposures, water conditions and vegetation are some of the points of consideration in the preparation of a flight mission.

(3) Time of photography : Deep shadows obscure terrain details, so when the sun is too low the time is unfavourable for photography. Aerial photography should normally be confined to the period when the Sun angle is more than 30° but less than half the angular coverage of the lens; the latter condition is advisable to avoid mirror-like reflection of light due to haze, which may otherwise appear in the photographs. The optimum Sun angle is 45°, and any departure from it is a compromise.

(4) Season of photography : Good photographic days are limited because of rain, cloud, haze and other natural phenomena. Good light conditions are therefore an important criterion for aerial photography, with a view to photographing the terrain as clearly as possible. In India such conditions occur just after the rains and before the onset of summer. Only in some specialised cases, e.g. forestry, agronomy or glacial studies, may aerial photography be required in a particular season.

INTRODUCTION TO OVERLAP, SIDELAP, DRIFT, CRAB, FIDUCIAL MARKS –

OVERLAP, SIDELAP –
For stereo viewing, aerial photographs have to be taken with sufficient overlap and sidelap to maintain a significant connection between flight paths (i.e. a connection that is easy to recognize). Generally, aerial photographs are taken with 60% overlap and 30% sidelap.

Vertical aerial photographic coverage of an area is normally obtained as a series of overlapping flight strips. As illustrated in the pictures below, the end lap is the overlap of successive photos along a flight strip, and the side lap is the overlap of adjacent flight strips. Side lap is required in aerial photography to prevent gaps from occurring between flight strips as a result of drift, crab, tilt, flying-height variations and terrain variations. Mapping photography is normally taken with a side lap of about 30%. An advantage of using this large a percentage is the elimination of the need to use the extreme edges of the photography, where the imagery is of poorer quality and suffers image distortions due to tilt and relief.
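The 60% end lap and 30% side lap quoted above fix the exposure spacing along a strip and the spacing between adjacent flight lines. A minimal sketch in Python; the ground-coverage value is an illustrative assumption.

```python
# Exposure spacing implied by the standard end lap and side lap quoted above.
# G_m: ground coverage of one photo side, in metres (illustrative value below).

def photo_spacing(G_m: float, endlap: float = 0.60, sidelap: float = 0.30):
    """Return (air base between exposures, spacing of adjacent flight lines)."""
    base = G_m * (1.0 - endlap)           # advance along the strip
    line_spacing = G_m * (1.0 - sidelap)  # distance between flight lines
    return base, line_spacing

# A photo covering 4,600 m on a side:
base, spacing = photo_spacing(4600.0)
print(round(base), round(spacing))  # 1840 3220
```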

Drift and Crab :

In an ideal aerial survey the aircraft flies in a straight line, completing each run parallel to the adjacent run (Figs. 2.4 and 2.5A). Unfortunately, at high altitude strong wind currents, called side winds, hinder the aeroplane in maintaining the pre-determined direction and straightness of the run. If no correction is made by the pilot, the flight path deviates from the original flight line in the direction of the wind, as shown in Fig. 2.5B. This is known as drift. An uncorrected drift gives a displaced pattern of photographs in which each subsequent photograph covers more area in the direction of the wind prevailing at the time of photography. If, however, the pilot wants to maintain the pre-determined direction of flight, he has to turn the nose of the aeroplane slightly into the wind, which rotates the aeroplane about its vertical axis. With this side-wind correction the original flight path is maintained, but the areal coverage is different from that originally planned, i.e. the coverage of each photograph is rotated in the direction opposite to the wind. This defect in aerial photography is known as crab (Fig. 2.5C). Both drift and crab ultimately reduce the stereoscopic coverage in the overlap and side lap.
FIDUCIAL MARKS –

Fiducial marks are index marks rigidly connected with the camera lens through the camera body, forming images on the negative which are so adjusted that the intersection of lines drawn between opposite fiducial marks defines the position of the principal point of the photograph. The lines joining opposite fiducial marks on a photograph are called the fiducial axes (Figs. 3.5 and 3.6).

Elements of Interpretation of aerial photographs –

• Photographic Tone

• Photographic Texture

• Shape of the objects


• Size of the objects

• Pattern

• Scale of photographs

• Vertical Exaggeration

Photographic Tone –

• Measure of relative amount of light reflected by an object and recorded on the photograph.

• It refers to the relative brightness or colour of objects on an image.

Photographic tone influenced by –

• Reflectivity of an object

• Angle of the reflected light

• Geographic latitude

• Type of photography and film sensitivity

• Light transmission of filters

• Photographic processing

Photographic Texture –

• Signifies the frequency of change and arrangement of tones in a photographic image

• Texture is produced by an aggregation of unit features

• It determines the overall visual “smoothness” or “coarseness”

• Texture can distinguish two objects with the same tone.

• Texture depends on the scale of the aerial photograph. As the scale is reduced, the texture progressively becomes finer and ultimately disappears.

Coarse texture: Clustered objects with rounded crowns

Medium texture: Scattered objects

Fine texture: Dense objects with small crowns

Smooth texture: Regular dispersion of uniform objects

Rough texture: Irregularly dispersed objects

Rippled texture: Develops due to water waves over a shallow water surface

Mottled texture: Typical of a pitted outwash plain

Granular texture: Sparse, loosely scattered objects

Shape of object –

• Qualitative statement referring to the general form, configuration or outline of an object


• Certain geomorphic features can be identified directly from their shape, provided they are not much eroded
• E.g. folds, linear intrusives, massive intrusives

Size of object –

• The size of an object is a function of photo scale, and is considered in combination with the shape of the object.

• The sizes can be estimated by comparing them with objects whose sizes are known.

• E.g. distinguishing a small storage shed from a mine pit

Pattern –

• Pattern is the spatial arrangement of objects and often reveals genetic relationships.

• The orderly repetition of aggregate features in certain geometric or planimetric arrangements.

• E.g. fold patterns, drainage patterns, outcrop and lithological patterns.

UNIT – 3

Electro-Magnetic spectrum. Space platforms. Reflectance of minerals. Vegetation, rocks and water. Elementary idea
about active and passive sensors. Introduction to IRS mission

Electromagnetic Spectrum –

The fundamental unit of electromagnetic phenomena is the photon, the smallest possible amount of electromagnetic
energy of a particular wavelength. Photons, which are without mass, move at the speed of light—300,000 km/sec
(186,000 miles/sec) in the form of waves analogous to the way waves propagate through the oceans. The energy of a
photon determines the frequency (and wavelength) of light that is associated with it. The greater the energy of the
photon, the greater the frequency of light and vice versa.
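The photon energy–wavelength relationship can be made concrete with a short calculation. A minimal sketch using E = hc/λ with approximate SI constants (the function name and example wavelengths are illustrative):

```python
# E = h*c / wavelength: shorter wavelength -> higher photon energy.
h = 6.626e-34   # Planck's constant, J*s (approximate)
c = 3.0e8       # speed of light, m/s (approximate)

def photon_energy(wavelength_m):
    """Energy in joules of a photon with the given wavelength in metres."""
    return h * c / wavelength_m

green = photon_energy(550e-9)    # visible green light, 550 nm
thermal = photon_energy(10e-6)   # thermal infrared, 10 micrometres
# A green photon carries roughly 18 times the energy of a 10-um photon.
```

This confirms the statement above: the shorter-wavelength (higher-frequency) green photon carries more energy than the thermal-infrared photon.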

The electromagnetic spectrum (EMS) includes wavelengths of electromagnetic radiation ranging from short-wavelength (high-frequency) gamma rays to long-wavelength (low-frequency) radio waves.

The entire array of electromagnetic waves comprises the electromagnetic (EM) spectrum. The waves are called
electromagnetic because they consist of combined electric and magnetic waves that result when a charged particle
(electron) accelerates. The EM spectrum has been arbitrarily divided into regions or intervals to which descriptive names
have been applied. At the very energetic (high frequency; short wavelength) end are gamma rays and x-rays. Radiation
in the ultraviolet region extends from about 1 nanometer to about 0.36 micrometers. It is convenient to measure the mid-regions of the spectrum in two units: micrometers (µm), a unit of length equivalent to one-millionth of a meter, and nanometers (nm), a unit of length equivalent to one-billionth of a meter. The visible region occupies the range between 0.4 and 0.7 µm, or its equivalent of 400 to 700 nm. The infrared (IR) region spans between 0.7 and 100 µm. At shorter wavelengths (near 0.7 µm) infrared radiation can be detected by special film, while at longer wavelengths it is felt as heat.

Longer wavelength intervals are measured in units ranging from millimeters (mm) through meters (m). The microwave
region spreads across 1 mm to 1 m; this includes all of the intervals used by man-made radar systems, which generate
their own active radiation directed towards (and reflected from) targets of interest. The lowest frequency (longest
wavelength) region—beyond 1 m—is associated with radio waves.

Visible part of the EM spectrum –

The visible light spectrum is the segment of the electromagnetic spectrum that the human eye can view. More simply, this range of
wavelengths is called visible light. Typically, the human eye can detect wavelengths from 380 to 700 nanometers. As the full spectrum
of visible light travels through a prism, the wavelengths separate into the colors of the rainbow because each color is a different
wavelength. Violet has the shortest wavelength, at around 380 nanometers, and red has the longest wavelength, at around 700
nanometers.

Space platforms –

Space-borne Platform
Space-borne platforms denote the spacecraft, satellites or space shuttles holding the remote sensing sensors. These orbit the Earth at an altitude of about 250 to 36,000 kilometres from the Earth's surface.
The unique advantage of space-borne platforms is that they cover a large area of the Earth's surface at a time and, literally, allow the whole Earth's surface to be photographed. Besides this, remote sensing cameras on space-borne platforms can capture images of the study area repeatedly with a certain time frequency. The mechanisms involved in space-borne platforms are semi-automated.

The space-borne platforms can be classified into two types, viz.

 Unmanned Platform : These are the space-borne platforms in which the remote sensing sensors are not controlled manually by astronauts; rather, the sensors carry out the remote sensing process automatically. Some instances of such unmanned space-borne platforms include the IRS satellites, the NavIC constellation, the GPS constellation, NASA's Landsat series satellites, and many more.

 Manned or Crewed Platform : In manned or crewed space-borne platforms, the remote sensing instruments are operated manually by astronauts, who carry out various crucial experiments while present on the platform. A prominent example of a manned space-borne platform is the International Space Station.

Reflectance of minerals –

Rocks, like soils, are single scatterers and exhibit relatively simple spectral properties. Unlike soils, rock reflectance is less dependent on water content and completely independent of organic matter content, texture or structure. Rock spectral reflectance primarily depends on mineral composition.

Low reflecting minerals such as goethite (an ore of iron) have spectral properties similar to soil surfaces - low to
moderate reflectance at visible wavelengths increasing into the NIR.

High reflecting minerals, such as quartz (silicate mineral) and calcite (calcium carbonate), exhibit almost uniformly high
reflectance throughout the visible, NIR and SWIR spectrum. Differences between high reflectance minerals occur at
specific wavelengths and these absorption features are known as diagnostic absorption features.

Rocks are similar to soils in reflectance, which is not surprising since soils are derived from weathered rocks. One major
difference between the two is the organic matter present in soils, which tends to decrease reflectance.

Reflectance of vegetation –

In case of vegetation, reflection of green light is due to the presence of the chlorophyll pigment in plant leaves.
Presence of the chlorophyll pigment results in unique spectral signature of vegetation that enables us to distinguish it
easily from other types of land cover (non-living) features in an optical/near-infrared image. The reflectance of
vegetation is low in both the blue and red regions of the EM spectrum, due to absorption of blue and red wavelengths
by chlorophyll for photosynthesis. It has a peak reflectance at the green region that gives green colour to vegetation. In
the near infrared (NIR) region, the reflectance is much higher than that in the visible band due to the cellular structure in
the leaves. Hence, vegetation can be easily identified in the NIR region of spectrum.

Reflectance of vegetation changes according to the composition, maturity and health of vegetation. The amount of
chlorophyll content determines the health of vegetation. Chlorophyll strongly absorbs radiation in the red and blue
wavelengths but reflects green wavelengths. Leaves appear ‘greenest’ to us when chlorophyll content is at its maximum.
In certain seasons, there is less chlorophyll in the leaves; so, there is less absorption and proportionately more reflection
of the red wavelengths, making the leaves appear red or yellow (yellow is a combination of red and green wavelengths).
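The red-absorption/NIR-reflectance contrast described above is the basis of the widely used Normalized Difference Vegetation Index (NDVI). A minimal sketch, with illustrative reflectance values that are not taken from any real scene:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

# Illustrative reflectance fractions (made-up numbers):
healthy = ndvi(nir=0.50, red=0.08)   # strong NIR reflectance, strong red absorption
stressed = ndvi(nir=0.30, red=0.15)  # weaker contrast -> lower NDVI
water = ndvi(nir=0.02, red=0.05)     # water absorbs NIR -> negative NDVI
```

Healthy vegetation gives high positive values, stressed vegetation lower values, and water negative values, which is why the index is used to map vegetation health.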

Reflectance of water –

In shallow water, some of the radiation is reflected not by the water itself but from the bottom of the water body. Therefore, in shallow pools and streams, it is often the underlying material that determines the water body's reflectance properties. Longer wavelengths of the visible and NIR regions of EMR are absorbed more strongly by water than shorter visible wavelengths. Thus water typically looks blue or blue-green due to stronger reflectance at these shorter wavelengths, and darker if viewed at red or NIR wavelengths. If suspended sediment is present in the upper layers of the water body, it will increase reflectivity and give the water a brighter appearance.

The apparent colour of the water will show a slight shift to longer wavelengths. Suspended sediment can be easily
confused with shallow (but clear) water, since these two phenomena appear very similar. Chlorophyll in algae absorbs
more of the blue wavelengths and reflects the green, making the water appear greener in colour in the presence of
algae. The topography of the water surface (rough, smooth, floating materials, etc.) can also lead to complications for
water-related interpretation due to potential problems of specular reflection and other influences on colour and
brightness.

Elementary idea about active and passive sensors –

In the active system, a specific part of the electromagnetic energy is produced from an artificial source and projected towards an area of interest. The reflected energy is then recorded for study. Radar is a good example of an active sensor.

The passive system does not depend on an artificial energy source; all energy is derived from the sun. Most commonly used aerial photography, such as panchromatic, infra-red and colour, uses this energy falling on the surface of the Earth.

Active sensors –

Each active sensor in remote sensing directs its signal to the object and then checks the response – the received
quantity. The majority of devices employ microwaves since they are relatively immune to weather conditions. Active
remote sensing techniques differ by what they transmit (light or waves) and what they determine (e.g., distance, height,
atmospheric conditions, etc.).

Radar is a sensor assisting in ranging with radio signals. Its specific feature is the antenna emitting impulses. When the
energy flow in radar active remote sensing meets an obstacle, it scatters back to the sensor to some degree. Based on its
amount and traveling time, it is possible to estimate how far the target is.

Lidar determines distance with light. Lidar active remote sensing implies transmitting light impulses and checking the quantity retrieved. The target's location and distance are obtained from the round-trip travel time of the pulse and the speed of light.

Laser altimeter measures elevation with lidar.

Ranging instruments estimate the range either with one or two identical devices on different platforms sending signals
to each other.

Sounder studies weather conditions vertically by emitting impulses, when it falls into the active category.

Scatterometer is a specific device to measure bounced (backscattered) radiation.
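Radar and lidar ranging, described above, rest on the same arithmetic: a pulse travels out and back, so the one-way distance is half the round-trip time multiplied by the propagation speed. A sketch (the function name and echo time are illustrative):

```python
c = 3.0e8  # speed of light, m/s (approximate)

def range_from_echo(round_trip_s):
    """One-way distance to the target: half the round-trip time times c."""
    return c * round_trip_s / 2.0

# A pulse echoing back after 4 microseconds came from about 600 m away.
d = range_from_echo(4e-6)
```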

Passive sensor –

Passive remote sensing depends on natural energy (sunrays) reflected by the target. Passive remote sensing employs multispectral or hyperspectral sensors that measure the acquired quantity with multiple band combinations. These combinations differ by the number of channels (two wavelengths and more). The scope of bands includes spectra within and beyond human vision (visible, IR, NIR, TIR, microwave).
Passive Remote Sensing Devices -

The most popular passive remote sensing examples of devices are various types of radiometers or spectrometers.

Instrument names clearly identify what they measure:

Spectrometer distinguishes and analyzes spectral bands.

Radiometer determines the power of radiation emitted by the object in particular band ranges (visible, IR, microwave).

Spectroradiometer finds out the power of radiation in several band ranges.

Hyperspectral radiometer operates with the most accurate type of passive sensor that is used in remote sensing. Due to
extremely high resolution, it differentiates hundreds of ultimately narrow spectral bands within visible, NIR and MIR
regions.

Imaging radiometer scans the object or a surface to reproduce the image.

Sounder senses the atmospheric conditions vertically.

Accelerometer detects changes in speed per unit of time (e.g., linear or rotational).

Introduction to IRS mission –

Indian Remote Sensing Satellites (IRS) are a series of Earth observation satellites built, launched and maintained by the Indian Space Research Organization (ISRO).

• The IRS provides many remote sensing services to India.

• The IRS system is the largest constellation of remote sensing satellites for civilian use in operation in the world today.

• Starting with IRS-1A in 1988, ISRO has launched many operational remote sensing satellites.

• Currently, 13 operational satellites are in Sun-synchronous orbit and 4 in geostationary orbit.

• The IRS program started in the mid-1980s.

• IRS data are used for the observation and management of the country's natural resources, with applications in agriculture, hydrology, geology, drought and flood monitoring, snow studies, land use, etc.

• A continuous supply of synoptic, repetitive, multispectral data of the Earth's land surfaces is obtained, so as to utilize the Earth's resources in more meaningful ways.

• The initial program of Earth-surface imaging was extended by the addition of sensors for complementary environmental applications.

India's remote sensing program was developed with the idea of applying space technologies for the benefit of
humankind and the development of the country. The program involved the development of three principal capabilities.
The first was to design, build and launch satellites to a Sun-synchronous orbit. The second was to establish and operate
ground stations for spacecraft control, data transfer along with data processing and archival. The third was to use the
data obtained for various applications on the ground.

India demonstrated the ability of remote sensing for societal application by detecting coconut root-wilt disease from a
helicopter mounted multispectral camera in 1970. This was followed by flying two experimental satellites, Bhaskara-1 in
1979 and Bhaskara-2 in 1981. These satellites carried optical and microwave payloads.

India's remote sensing programme under the Indian Space Research Organization (ISRO) started in 1988 with IRS-1A, the first of a series of indigenous, state-of-the-art operational remote sensing satellites, which was successfully launched into a polar Sun-synchronous orbit on March 17, 1988, from the Soviet cosmodrome at Baikonur.
It has sensors like LISS-I which had a spatial resolution of 72.5 metres (238 ft) with a swath of 148 kilometres (92 mi) on
ground. LISS-II had two separate imaging sensors, LISS-II A and LISS-II B, with spatial resolution of 36.25 metres (118.9 ft)
each and mounted on the spacecraft in such a way to provide a composite swath of 146.98 kilometres (91.33 mi) on
ground. These tools quickly enabled India to map, monitor and manage its natural resources at various spatial
resolutions. The operational availability of data products to the user organisations further strengthened the relevance of
remote sensing applications and management in the country.

UNIT – 4

Multispectral scanners (MSS). Thematic Mappers (TM). Linear imaging self scanning (LISS). Elementary idea about image processing and the concept of Geographic Information System (GIS).

Multispectral scanners (MSS) –

The Multispectral Scanner (MSS) is a line scanning device which scans the terrain passing beneath the spacecraft or aeroplane. The function of the scanner is to produce synchronous images, each at a different waveband. The Earth Resources Technology Satellites (ERTS, or Landsat 1, 2 and 3) are fitted with MSS instruments which have oscillating mirrors that scan the Earth's surface below the moving satellite. Natural energy reflected or radiated from the surface of the Earth and its atmosphere is reflected by the mirror into a reflecting telescope and focussed on fibre-optic bundles located in the focal plane of the telescope. Radiation is conducted by the fibre-optic light pipes to filters that permit only certain wavelengths of radiation to strike the detectors. The voltage produced by each detector is related to the amount of radiation that reaches it. Each detector produces a voltage from zero to five volts. This analog signal is converted into digital values (from 0 to 63) by a multiplexer. Thus, 24 detectors are used on the MSS to record six lines of data in four wavelength bands.
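The analog-to-digital step described above (a 0–5 V detector voltage mapped to a 6-bit digital number, 0–63) can be sketched as follows; the function name and the clamping behaviour are illustrative, not taken from any MSS specification:

```python
def quantize(voltage, v_max=5.0, levels=64):
    """Map an analog detector voltage (0..v_max) onto an integer digital
    number 0..levels-1, mimicking the 6-bit multiplexer described above."""
    voltage = max(0.0, min(voltage, v_max))   # clamp out-of-range voltages
    return min(int(voltage / v_max * levels), levels - 1)

# 0 V -> DN 0; 2.5 V -> DN 32; 5 V -> DN 63
```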
Thematic Mappers (TM) –

A Thematic Mapper (TM) is one of the Earth observing sensors introduced in the Landsat program. The first was placed
aboard Landsat 4 (decommissioned in 2001), and another was operational aboard Landsat 5 up to 2012.

The Thematic Mapper (TM) is an advanced multispectral scanning Earth resources sensor designed to achieve higher image resolution, sharper spectral separation, improved geometric fidelity and greater radiometric accuracy and resolution than the MSS sensor.

The TM sensor records reflected and emitted electromagnetic energy from the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the spectrum.

TM sensors feature seven bands of image data (three in visible wavelengths, four in infrared) most of which have 30
meter spatial resolution. TM is a whisk broom scanner which takes multi-spectral images across its ground track. It does
not directly produce a thematic map.

TM Technical Specifications -

Sensor type: opto-mechanical

Spatial Resolution: 30 m (120 m – thermal)

Spectral Range: 0.45 – 12.5 µm

Number of Bands: 7

Temporal Resolution: 16 days

Image Size: 185 km X 172 km

Swath: 185 km

Programmable: yes

Linear imaging self scanning (LISS) –

The Linear Imaging Self-Scanning (LISS) sensors are the optical cameras flown on the IRS series of satellites. Unlike the oscillating-mirror MSS, a LISS camera is a pushbroom scanner: a linear array of charge-coupled device (CCD) detectors images an entire line across the swath at once, and the forward motion of the satellite builds up the image line by line. LISS-I on IRS-1A had a spatial resolution of 72.5 m, while the two LISS-II cameras provided 36.25 m; later versions (LISS-III and LISS-IV) offered progressively finer resolution and additional spectral bands.

Elementary idea about image processing –

Digital Image Processing is the manipulation of digital data with the help of computer hardware and software to produce digital maps in which specific information has been extracted and highlighted.

Many image processing and analysis techniques have been developed to aid the interpretation of remote sensing
images and to extract as much information as possible from the images.

Pre-processing –
Remotely sensed raw data generally contain flaws and deficiencies arising from the imaging sensor mounted on the satellite. The correction of these deficiencies and the removal of flaws through appropriate methods are termed pre-processing.
All pre-processing methods are considered under three heads, namely:

a) Radiometric correction methods
b) Atmospheric correction methods
c) Geometric correction methods

a) Radiometric correction method –

Radiometric corrections, also called cosmetic corrections, are done to improve the visual appearance of the image. Some of the radiometric distortions corrected are:

1. Correction for missing lines
2. Correction for periodic line striping
3. Random noise correction
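A common approach to the first of these corrections is to replace a dropped scan line with the average of the lines above and below it. A minimal sketch on a toy image (not a library routine):

```python
def fill_missing_line(image, bad_row):
    """Replace a dropped scan line with the average of the lines above and
    below it -- a simple cosmetic (radiometric) correction."""
    above, below = image[bad_row - 1], image[bad_row + 1]
    image[bad_row] = [(a + b) // 2 for a, b in zip(above, below)]
    return image

img = [[10, 12, 14],
       [0, 0, 0],      # dropped line recorded as zeros
       [20, 22, 24]]
fill_missing_line(img, 1)   # row 1 becomes [15, 17, 19]
```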

b) Atmospheric correction method –

The atmosphere has an effect on the measured brightness value of a pixel. Other difficulties are caused by variation in the illumination geometry. Atmospheric path radiance introduces haze in the imagery, thereby decreasing the contrast of the data.

c) Geometric correction methods –

The transformation of a remotely sensed image into a map with defined scale and projection properties is called geometric correction.

Image Enhancement –

In order to aid visual interpretation, the appearance of objects in the image can be improved by image enhancement techniques such as grey-level stretching to improve the contrast and spatial filtering to enhance the edges.
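Grey-level stretching can be sketched as a linear remapping of the band's observed minimum and maximum onto the full display range (the toy values are illustrative):

```python
def linear_stretch(band, out_min=0, out_max=255):
    """Map the band's observed min/max linearly onto [out_min, out_max]."""
    lo = min(min(row) for row in band)
    hi = max(max(row) for row in band)
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (v - lo) * scale) for v in row] for row in band]

band = [[50, 60],
        [70, 100]]
stretched = linear_stretch(band)   # 50 -> 0 and 100 -> 255
```

A low-contrast band occupying only part of the grey scale is spread across the whole range, making subtle tonal differences easier to see.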

Image Classification –

Different landcover types in an image can be discriminated using image classification algorithms based on spectral features, i.e. the brightness and "colour" information contained in each pixel. The classification procedures can be "supervised" or "unsupervised".

In supervised classification, the spectral features of some areas of known landcover types are extracted from the image.
These areas are known as the "training areas". Every pixel in the whole image is then classified as belonging to one of
the classes depending on how close its spectral features are to the spectral features of the training areas.

In unsupervised classification, the computer program automatically groups the pixels in the image into separate clusters,
depending on their spectral features. Each cluster will then be assigned a landcover type by the analyst.

Each class of landcover is referred to as a "theme" and the product of classification is known as a "thematic map".
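Unsupervised classification can be illustrated with a tiny one-band k-means clustering; real classifiers work on multi-band spectral vectors, but the idea is the same. The pixel values and initial centres below are made up:

```python
def kmeans_1d(pixels, centers, iters=20):
    """Tiny unsupervised classifier: cluster one-band pixel values around
    the given initial centres, then label each pixel by nearest cluster."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in pixels:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(len(centers)), key=lambda i: abs(p - centers[i]))
              for p in pixels]
    return labels, centers

# Dark (water-like) and bright (sand-like) pixel values, made-up numbers:
labels, centers = kmeans_1d([12, 15, 10, 200, 210, 205], [0.0, 255.0])
# labels -> [0, 0, 0, 1, 1, 1]
```

The analyst would then inspect each cluster and assign it a landcover theme, as described above.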

Image Filtering –

Spatial filtering is the process of separating the image into its constituent spatial frequencies and selectively altering certain spatial features. This technique increases the analyst's ability to discriminate detail. The three types of spatial filters used in remote sensor data processing are:

1. Low pass filters
2. Band pass filters
3. High pass filters
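The effect of low-pass (smoothing) and high-pass (edge-enhancing) filters can be sketched with a 3×3 convolution on a toy image; border pixels are left unchanged for simplicity, and the kernels are standard textbook examples:

```python
def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to the interior pixels of a 2-D image
    (list of lists); border pixels are left unchanged for simplicity."""
    out = [row[:] for row in image]
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            out[r][c] = sum(image[r + i][c + j] * kernel[i + 1][j + 1]
                            for i in (-1, 0, 1) for j in (-1, 0, 1))
    return out

low_pass = [[1 / 9] * 3 for _ in range(3)]               # mean (smoothing) filter
high_pass = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]    # edge enhancement

img = [[10, 10, 10],
       [10, 100, 10],
       [10, 10, 10]]
smoothed = convolve3x3(img, low_pass)   # bright spot averaged down
edges = convolve3x3(img, high_pass)     # bright spot strongly emphasised
```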

Image Transformation – Image transformations are based on arithmetic operations applied to the image data. The resulting images may have properties that make them better suited to a particular purpose than the original.
Concept of Geographic Information System (GIS) –

OVERVIEW OF GIS –

A Geographical Information System (GIS) is a system for capturing, storing, analyzing and managing data and associated attributes which are spatially referenced to the Earth. A geographical information system is also called a geographic information system or geospatial information system. It is an information system capable of integrating, storing, editing, analyzing, sharing, and displaying geographically referenced information. In a more generic sense, GIS is a software tool that allows users to create interactive queries, analyze spatial information, edit data and maps, and present the results of all these operations. GIS technology is becoming an essential tool for combining various maps and remote sensing information to generate models that are used in real-time environments. Geographical information science is the science underlying the geographic concepts, applications and systems.

Geographical Information System can be used for scientific investigations, resource management, asset management,
environmental impact assessment, urban planning, cartography, criminology, history, sales, marketing, and logistics. For
example, agricultural planners might use geographical data to decide on the best locations for a location specific crop
planning, by combining data on soils, topography, and rainfall to determine the size and location of biologically suitable
areas. The final output could include overlays with land ownership, transport, infrastructure, labour availability, and
distance to market centers.

GIS Workflow –

GIS Workflow depends on the following components:

3.1 User –

It includes the technical experts, managers and administrators who are responsible for day-to-day operations in GIS. People from various disciplines use GIS as a tool that allows them to perform their tasks more accurately. For example, a town planner uses GIS in town planning and an academician uses GIS for teaching and research.

3.2 Information –

It is the most important and most expensive component of GIS and can be divided into the following two categories.

3.2.1 Spatial data –

It is the data available in raster or image form that comprises the geographic location, shape, size, orientation and boundaries of features present on the Earth's surface, such as lakes, forests, mountains, town boundaries etc. It is also known as geospatial data.

3.2.2 Tabular data –

It is also known as non-spatial or attribute data. It holds information describing spatial features in detail; for example, the states, natural regions, religions and populations of India are tabular data arranged in tabular form, because these variables are independent of location.

3.3 Methodology / Procedure –

Methodology is the detailed workflow that includes data preparation, data manipulation, data analysis and the final result. Procedure is the defined way in which the data are analysed to produce better results. It includes the guidelines, standards and protocols that are very important for the project work of an organisation.
3.4 Hardware –

Hardware plays an important role in the GIS environment because GIS work depends on huge datasets, especially satellite images, that need large memory and fast processing for data manipulation and analysis. Some important hardware components used in GIS are given below.

3.4.1 Visual Display Unit (VDU) / Monitor –

It is an electronic device used to display the information output by the computer.

3.4.2 Keyboard –

It is a device used for entering instructions, commands and data. The keyboard is used to enter attribute data, while manual digitizing is used to enter spatial information from maps and images into GIS.

3.4.3 Mouse –

It is a pointing device used for data selection, editing, zooming in and out, and creating the spatial database in GIS.

3.4.4 Central Processing Unit (CPU) –

The CPU is the brain of the computer and is known as the processor. It is an electronic circuit that executes computer programs.

3.4.5 Scanner –

A scanner is an electronic device that captures data from photographs, images and other sources for computer editing
and display.

3.4.6 Printer / Plotter –

A printer / plotter is an electronic device that accepts text and graphic output from a computer and transfers the
information to paper.

3.4.7 Internet –

It is a source of information and data collected from around the world.

3.5 Software –

GIS software is the set of functions and tools comprising an input module, editing module, analysis module and modelling capability which allow us to solve problems spatially. Some GIS software packages are ArcGIS (ESRI), QGIS (open source) and MapInfo (Pitney Bowes).
UNIT – 5

Applications of photo Geology and Remote sensing in the study of Geomorphology, Lithology and Structural Features
and Hydrogeologic studies.

Application of Remote Sensing In Geomorphology

In the realm of Earth science, geomorphology has reaped the benefits of technological advances,
particularly the application of remote sensing. The utilization of remote sensing in geomorphology provides
a panoramic perspective, offering an in-depth understanding of the Earth’s physical attributes. This
technology has profoundly influenced numerous studies, from analyzing landforms to assessing
geohazards.

1. Landform Classification

Remote sensing enables accurate landform classification by analyzing topographic variations and
surface characteristics. By employing digital elevation models and multispectral imagery, researchers
can identify and classify different landforms such as mountains, valleys, plateaus, and coastal
features.
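Landform classification from a digital elevation model often begins with terrain derivatives such as slope. A sketch computing slope by central differences on a toy DEM (the 30 m cell size and the values are illustrative):

```python
import math

def slope_deg(dem, r, c, cell=30.0):
    """Slope in degrees at DEM cell (r, c) from central differences;
    `cell` is the ground spacing of the grid in metres."""
    dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cell)
    dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

# A plane rising 30 m per 30 m cell eastward has a 45-degree slope:
dem = [[0, 30, 60],
       [0, 30, 60],
       [0, 30, 60]]
s = slope_deg(dem, 1, 1)
```

Thresholding such slope values (flat plains, gentle slopes, steep mountain fronts) is one simple route from a DEM to a landform class map.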

2. Change Detection

Monitoring landscape changes is essential for understanding dynamic processes. Remote sensing
facilitates change detection by comparing images captured at different time intervals. This technique
aids in identifying erosion, deposition, vegetation growth, and other transformative events occurring in
geomorphic environments.
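Change detection by image differencing can be sketched as thresholding the absolute pixel-wise difference between two co-registered dates; the threshold and pixel values here are arbitrary:

```python
def change_mask(before, after, threshold=20):
    """Flag pixels whose value changed by more than `threshold` between
    two co-registered images of the same area taken at different dates."""
    return [[abs(b - a) > threshold for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(before, after)]

before = [[50, 52],
          [120, 48]]
after = [[51, 90],
         [60, 49]]
mask = change_mask(before, after)   # only the two large changes are flagged
```

Flagged pixels mark candidate sites of erosion, deposition or vegetation change for further inspection.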

3. Slope Stability Analysis

Assessing slope stability is crucial for predicting landslides and mitigating associated risks. Remote
sensing provides valuable data for slope stability analysis by mapping terrain features, analyzing land
cover patterns, and identifying potential instabilities based on surface characteristics and vegetation
distribution.

4. Hydrological Studies

Remote sensing plays a pivotal role in hydrological studies by examining water bodies, river networks,
and precipitation patterns. It aids in understanding the flow dynamics, water quality, flood mapping,
and monitoring changes in hydrological systems over time.

5. Glacial Studies
Studying glaciers and ice sheets is essential for climate change research. Remote sensing techniques
like synthetic aperture radar (SAR) and thermal infrared imaging help monitor glacial retreat, mass
balance, and ice dynamics, providing critical data for understanding the impacts of global warming.

6. Coastal Zone Management

Coastal areas are prone to erosion, sea-level rise, and other geomorphic changes. Remote sensing
assists in coastal zone management by mapping shorelines, monitoring coastal erosion, tracking
sediment transport, and assessing the vulnerability of coastal ecosystems to climate-related hazards.

7. Geohazard Assessment

Remote sensing aids in geohazard assessment by identifying potential hazards such as earthquakes,
volcanic eruptions, and landslides. It enables the monitoring of active fault lines, volcanic activity, and
surface deformation, contributing to early warning systems and disaster management strategies.

8. Archaeological Studies

Remote sensing plays a vital role in archaeological studies by identifying buried archaeological
features and mapping ancient landscapes. Techniques like aerial photography, LiDAR, and
multispectral imaging assist in uncovering hidden sites, monitoring archaeological sites’ conditions,
and aiding in cultural heritage preservation.

9. Urban Planning

Remote sensing supports urban planning by providing accurate spatial data for land use mapping,
infrastructure development, and urban growth monitoring. It helps urban planners assess environmental
impacts, plan transportation networks, and make informed decisions regarding sustainable urban
development.

10. Vegetation Analysis

Assessing vegetation dynamics is crucial for ecological studies. Remote sensing allows for the
monitoring of vegetation health, species distribution, and biomass estimation. By analyzing spectral
reflectance and vegetation indices, researchers can track changes in vegetation patterns and detect
stress-induced alterations.

11. Soil Erosion Mapping


Soil erosion poses significant challenges to land management and agricultural practices. Remote
sensing assists in soil erosion mapping by identifying vulnerable areas, quantifying soil loss, and
evaluating the effectiveness of erosion control measures. This data aids in developing sustainable land
management strategies.

12. Geological Mapping

Remote sensing techniques facilitate geological mapping by providing detailed information about rock
types, geological structures, and lithological variations. It aids in identifying mineral deposits, mapping
fault lines, and understanding the geological history of an area.

Conclusion

Remote sensing has revolutionized the field of geomorphology, enabling scientists to explore and
understand Earth’s landscapes like never before. By harnessing the power of various imaging
techniques, remote sensing has proven invaluable in applications such as landform classification,
change detection, slope stability analysis, hydrological studies, glacial research, coastal zone
management, geohazard assessment, archaeological studies, urban planning, vegetation analysis, soil
erosion mapping, and geological mapping.
