Introduction To Radar (3590) PDF
CONTENTS
CHAPTER 1: INTRODUCTION
1.1 ABOUT LRDE
1.2 ABOUT RADAR
1.3 OBJECTIVE
1.4 REPORT ORGANIZATION
2.4.1 TRANSMITTER
2.4.2 DUPLEXER
2.4.3 RECEIVER
2.4.4 ANTENNA
2.5 APPLICATION
4.4 APPLICATIONS 44
REFERENCES
List of Figures
Name of the figures Page No.
Fig. 1.1 Block diagram of RADAR 06
Fig. 2.1 A half-wave dipole antenna radiating radio waves 10
Fig. 2.2 Schematic diagram of a RADAR 11
Fig. 3.4 Building up a radar image using the motion of the platform 23
Fig. 4.6 Echo spectrum and associated range(delay) profile for a single burst
Fig. 4.7(a) A stationary radar and a rotating target is equivalent to (b) a spotlight SAR
Fig. 4.9 ISAR data arranged in 2-D range cells and pulses domain
Fig. 4.10 Aligned range profiles after applying envelope cross correlation
Fig. 4.11 Phase function at a range cell (a) before phase adjustment and (b) after phase adjustment
Fig. 4.13 Relationship between the radar range profiles in the spatial domain
Fig. 4.2 Estimation of angular velocity and acceleration from received data. 43
Fig. 4.14 ISAR image of a fighter plane after applying min. entropy compensation 56
Fig. 4.15 ISAR image of a single point after applying min. entropy compensation 57
Fig. 4.16 Spectrogram of range cell (a fighter plane) 57
Fig. 4.17 Spectrogram of range cell (a single point) 58
Fig. 4.18 A hypothetical target composed of (a) perfect point scatterers, (b) a single point 60
Fig. 4.19 Conventional ISAR image of aeroplane target with translational and rotational motion 61
Fig. 4.20 Conventional ISAR image of a single point target with translational and rotational motion 62
Fig. 4.21 Spectrogram of range cell 62
Fig. 4.22 ISAR image of the aeroplane target after translational motion compensation 63
Fig. 4.23 ISAR image of a single point target after translational motion compensation 63
Fig. 4.24 Spectrogram of time pulses (non-compensated) of (a) fighter (b) single point 64
Fig. 4.25 ISAR image of a fighter after applying min. entropy compensation 65
Fig. 4.26 ISAR image of a single point after applying min. entropy compensation 65
Fig. 4.21 Spectrogram of time pulses (compensated) 66
CHAPTER 1
INTRODUCTION
RADAR is an acronym for RAdio Detection And Ranging. It is used to gather information about a
target's location, speed, direction, shape, identity, or simply its presence, by processing the reflected
radio frequency (RF) or microwave signals. In its basic operation, a transmitter sends out a signal
which is partly reflected by a distant target and then detected by a sensitive receiver. If a narrow-beam
antenna is used, the target's direction is given accurately by the position of the antenna. The
distance to the target is determined by the time required for the signal to travel to the target and back,
and the radial velocity of the target is related to the Doppler shift of the return signal.
Fig. 1.1 shows the block diagram of a RADAR system. Basically, it consists of a transmitter, receiver,
duplexer and antenna.
1.3 OBJECTIVE
CHAPTER 2
LITERATURE REVIEW
Radar is an object-detection system that uses radio waves to determine the range, angle, or
velocity of objects. It can be used to detect aircraft, ships, spacecraft, guided missiles, motor vehicles,
weather formations, and terrain. A RADAR system generally consists of a transmitter which
produces an electromagnetic signal that is radiated into space by an antenna. When this signal strikes
any object, it is reflected, or reradiated, in many directions. This reflected or echo signal is received
by the radar antenna, which delivers it to the receiver, where it is processed to determine the
geographical statistics of the object. The range is determined by calculating the time taken by the
signal to travel from the RADAR to the target and back. The target's angular location is measured
from the direction in which the antenna points when the echo signal has maximum amplitude. To
measure the range and location of moving objects, the Doppler effect is used.
No single nation and no single person can claim the discovery and development of radar
technology as its own invention. Rather, "radar" must be seen as an accumulation of many
developments and improvements, to which scientists from several nations contributed in parallel.
There are nevertheless some milestones, marking the discovery of important basic knowledge and
important inventions:
1865, The Scottish physicist James Clerk Maxwell presents his "Theory of the Electromagnetic Field"
(a description of electromagnetic waves and their propagation). He demonstrated that electric and
magnetic fields travel through space in the form of waves, at the constant speed of light.
1886, The German physicist Heinrich Rudolf Hertz discovered electromagnetic waves, thus
demonstrating the Maxwell theory.
1897, The Italian inventor Guglielmo Marconi achieved the first long distance transmission of
electromagnetic waves. In his first experiments he used a wire to a wooden pole. In Italian a tent pole
is known as l'antenna centrale, and the pole with a wire alongside it used as an aerial was simply called
l'antenna. Today Marconi is known as pioneer of radio communication.
1900, Nikola Tesla suggested that the reflection of electromagnetic waves could be used to detect
moving metallic objects.
1904, The German engineer Christian Hülsmeyer invents the "telemobiloscope" for traffic
monitoring on the water in poor visibility. This was the first practical radar test. Hülsmeyer applied
for patents on his invention in Germany, France and the United Kingdom.
1921, The invention of the Magnetron as an efficient transmitting tube by the American physicist
Albert Wallace Hull.
1922, The American electrical engineers Albert H. Taylor and Leo C. Young of the Naval Research
Laboratory (USA) locate a wooden ship for the first time.
1930, Lawrence A. Hyland (also of the Naval Research Laboratory), locates an aircraft for the first
time.
1931, In Britain the first known proposal for a radar system came from William A. S. Butement and
P. E. Pollard in January 1931. They equipped a ship with radar, using parabolic dishes with horn
radiators as antennas. Although their equipment produced short-range results, the work was abandoned
for lack of government support.
1933, On the basis of the sonar he had invented in 1931, Rudolph Kühnhold presented a so-called
"Funkmessgerät". It worked at a wavelength of 48 cm, and the transmitter had a power of about
40 watts. From these tests the Freya radar was developed, which was produced in series beginning in
1938.
1935, Robert Watson-Watt (later: Sir Robert) suggested that radio waves might be used to detect
aircraft at a distance and outlined a means of doing so. Intensive research began and by
1939, Britain possessed a defensive chain of highly secret Radio Direction Finding (RDF) stations.
1936, The development of the Klystron by the technicians George F. Metcalf and William C. Hahn,
both of General Electric. It would become an important component in radar units, as an amplifier or an
oscillator tube.
1939, Two engineers from the University of Birmingham, John Turton Randall and Henry Albert
Howard Boot, built a small but powerful radar using a multicavity magnetron. B-17 airplanes
were fitted with this radar, so that they could find, and thus combat, German submarines at night
and in fog.
1940, Different radar sets are developed in the USA, Russia, Germany, France and Japan.
Driven by the general war events and the rise of the air force to a major branch of service, radar
technology underwent a strong development boost during World War II, and radar sets were later
used in large numbers along the inner German border during the Cold War.
Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic
spectrum longer than those of infrared light. Radio waves have frequencies ranging from as high as
300 GHz down to as low as 3 kHz. They are used for fixed and mobile radio communication,
broadcasting, radar and other navigation systems.
1. WORKING
2.3 Principle of RADAR
The electronic principle on which radar operates is very similar to the principle of sound-wave
reflection. If you shout in the direction of a sound-reflecting object (like a rocky canyon or cave), you
will hear an echo. If you know the speed of sound in air, you can then estimate the distance and general
direction of the object. The time required for an echo to return can be roughly converted to distance if
the speed of sound is known.
Radar uses electromagnetic energy pulse. The radio-frequency (RF) energy is transmitted to and
reflected from the reflecting object. A small portion of the reflected energy returns to the radar set.
This returned energy is called an ECHO, just as it is in sound terminology. Radar sets use the echo to
determine the direction and distance of the reflecting object.
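The echo-timing principle above can be sketched in a few lines of Python (the delay value is illustrative): since the pulse travels to the target and back at the speed of light c, the one-way range is R = c·t/2.

```python
# Range from round-trip echo delay: R = c * t / 2.
# The 1 ms delay below is an illustrative value, not from the report.
C = 3.0e8  # speed of light in m/s (approximate)

def echo_range(round_trip_s: float) -> float:
    """One-way range in metres for a measured round-trip delay."""
    return C * round_trip_s / 2.0

print(echo_range(1e-3))  # 150000.0 -> a 1 ms round trip puts the target 150 km away
```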
2.4.1 Transmitter
The transmitter may be an oscillator tube, such as a magnetron oscillator, which produces a
pulse-modulated sine-wave carrier. The magnetron oscillator is the most widely used of the various
microwave generators for radar. The waveform generated by the transmitter travels via a transmission
line to the antenna, where it is radiated into space.
2.4.2 Duplexer
The function of the duplexer is to protect the receiver from damage caused by the high power
of the transmitter. The duplexer also serves to channel the returned echo signals to the receiver and
not to the transmitter. The duplexer might consist of two gas discharge devices, one known as TR
(Transmit - Receive) and the other an ATR (Anti Transmit - Receive). The TR is employed during
transmission and the ATR directs the echo signal to the receiver during reception. Solid state ferrite
circulators and receiver protectors with gas plasma TR devices and/or diode limiters are also employed
as duplexers.
2.4.3 Waveguides
A waveguide is an electromagnetic feed line used in microwave communications, broadcasting,
and radar installations. The waveguides are transmission lines for transmission of the RADAR signals.
A waveguide consists of a rectangular or cylindrical metal tube or pipe. The electromagnetic field
propagates lengthwise.
2.4.4 Receiver
The receiver is usually of the superheterodyne type. The first stage is a low-noise RF amplifier,
such as a parametric amplifier or a low-noise FET amplifier, and the receiver input is fed through a
mixer stage. Although a receiver with a low-noise front end will be more sensitive, a mixer front end
can have a greater dynamic range, less susceptibility to overload, and less vulnerability to electronic
interference. The mixer and local oscillator convert the RF signal to an Intermediate Frequency (IF).
The IF amplifier should be designed as a 'matched filter', i.e. its frequency response H(f) should
maximize the peak-signal to mean-noise power ratio at the output. Once the signal-to-noise ratio in
the IF amplifier is maximized, the pulse modulation is extracted by the second detector and
amplified by the video amplifier.
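The matched-filter idea can be illustrated with a small numeric sketch (the pulse shape, echo delay and noise level are assumed for the example, not from the report): correlating the received signal with a copy of the transmitted pulse produces an output peak at the echo delay.

```python
import numpy as np

# Matched filtering as correlation with the transmitted pulse.
# Pulse shape, echo delay and noise level are illustrative choices.
rng = np.random.default_rng(0)

pulse = np.ones(16)                     # simple rectangular pulse
rx = np.zeros(256)
rx[100:116] += pulse                    # echo delayed by 100 samples
rx += 0.1 * rng.standard_normal(256)    # receiver noise

mf_out = np.correlate(rx, pulse, mode='valid')  # slide pulse over rx
delay = int(np.argmax(mf_out))          # peak index estimates the delay
print(delay)  # 100
```

In a real receiver this operation is performed at IF or in the digital domain; the point of the sketch is only that the filter matched to the transmitted pulse maximizes the output peak where the echo lies.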
2.4.5 Antenna
The common form of the radar antenna is the reflector with a parabolic shape, fed from a point
source at its focus. The parabolic reflector focuses the energy into a narrow beam. The beam may be
scanned in space by mechanical pointing of the antenna. Phased array antennas have also been used
for radar. In a phased array the beam is scanned by electronically varying the phase of the currents
across the aperture.
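The electronic scanning mentioned above can be made concrete with a small sketch (element spacing, wavelength and steering angle are assumed values): for a uniform linear array, steering the beam to angle θ requires a progressive phase shift of 2πd·sin(θ)/λ per element.

```python
import math

# Progressive phase shift for steering a uniform linear array.
# Spacing (0.05 m), wavelength (0.1 m, i.e. 3 GHz) and the 30-degree
# steering angle are illustrative assumptions.
def element_phase_rad(n: int, spacing_m: float,
                      wavelength_m: float, steer_deg: float) -> float:
    k = 2.0 * math.pi / wavelength_m          # wavenumber
    return k * n * spacing_m * math.sin(math.radians(steer_deg))

phases = [element_phase_rad(n, 0.05, 0.1, 30.0) for n in range(4)]
print([round(p, 3) for p in phases])  # [0.0, 1.571, 3.142, 4.712]
```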
2.4.6 Signal Processing Hardware
(A)Radar Signal Processor
The signal processor is that part of the system which separates targets from clutter on the basis of
Doppler content and amplitude characteristics. In modern radar sets the conversion of radar signals to
digital form is typically accomplished after IF amplification and phase sensitive detection. At this stage
they are referred to as video signals, and have a typical bandwidth in the range 250 kHz to 5 MHz.
The Sampling Theorem therefore indicates sampling rates between about 500 kHz and 10 MHz. Such
rates are well within the capabilities of modern analogue-to-digital converters (ADCs).
The signal processor includes the following components:
The I&Q Phase Detector,
The Moving Target Indication and,
The Constant False Alarm Rate detection.
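As a numeric sketch of the I&Q phase detector listed above (the IF, sample rate, amplitude and phase are assumed values, not from the report): mixing the IF signal with a cosine and a sine of the carrier and low-pass filtering recovers the amplitude and phase of the echo.

```python
import numpy as np

# I&Q phase detection sketch. All signal parameters are illustrative.
fs = 1.0e6                      # sample rate, Hz
f_if = 100e3                    # intermediate frequency, Hz
t = np.arange(2000) / fs        # whole number of carrier periods

amp, phase = 2.0, 0.7           # echo amplitude and phase to recover
sig = amp * np.cos(2 * np.pi * f_if * t + phase)

# Mix to baseband; the mean over whole carrier periods acts as a
# crude low-pass filter here.
i = 2 * np.mean(sig * np.cos(2 * np.pi * f_if * t))
q = -2 * np.mean(sig * np.sin(2 * np.pi * f_if * t))

print(round(float(np.hypot(i, q)), 3), round(float(np.arctan2(q, i)), 3))  # 2.0 0.7
```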
(B) Threshold Decision
The radar threshold is a parameter that affects radar performance directly, by forcing a tradeoff
between detection probability and false-alarm probability. When designing radar systems, it must be
set accurately in order to make reliable decisions about target detection. The output of the receiver is
compared with a threshold to detect the presence of any object; if the output is below the threshold,
only noise is assumed to be present.
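The threshold idea described above is usually made adaptive in practice; a minimal cell-averaging CFAR sketch (the window sizes, scale factor and the injected target are all illustrative assumptions) looks like this:

```python
import numpy as np

# Cell-averaging CFAR sketch: the threshold for each cell is a scaled
# average of the surrounding "training" cells, so the false-alarm rate
# stays roughly constant as the noise level changes. Parameters are
# illustrative, not from the report.
def ca_cfar(power: np.ndarray, n_train: int = 8, n_guard: int = 2,
            scale: float = 5.0) -> np.ndarray:
    """Return a boolean detection mask over the power samples."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        left = power[i - n_guard - n_train : i - n_guard]
        right = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = np.mean(np.concatenate([left, right]))
        detections[i] = power[i] > scale * noise
    return detections

rng = np.random.default_rng(1)
p = rng.exponential(1.0, 200)   # noise-like power samples
p[100] = 50.0                   # strong target return
hits = np.where(ca_cfar(p))[0]
print(hits)                     # includes index 100
```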
3. APPLICATIONS
The modern uses of radar are highly diverse. They include Air traffic control, Air-defense systems,
and Antimissile systems.
Nautical Radars: used to locate landmarks and other ships; related systems include
ocean-surveillance and outer-space surveillance systems
Aviation Radars: Aircrafts are equipped with radar devices that warn of obstacles in or
approaching their path and give accurate altitude readings
Marine Radars: They are used to measure the bearing and distance of ships to prevent collision
with other ships, to navigate and fix their positions at sea when within range of shore or other fixed
references such as islands, buoys and light ships
Weather-sensing Radars: It is an important tool in weather forecasting and helps make the
forecasts more accurate
Detection and search Radar: It is the “early warning radar”, which is used for long-range
detection of objects
Target Acquisition (TA) Radar systems: used in surface-to-air missile (SAM) systems to locate
targets. These types of radar are often used in the military and in coastal surveillance, as well as for
detecting vehicle speed in highway patrol
Missile Guidance Systems: This radar is used to locate the target of missile often present in
Military aircraft
Radar for Biological Research: Bird and Insect radar are used frequently by scientists to track
the migration patterns of animals. Bird radar is also being used in NASA’s Kennedy Space Center
in Florida to track the presence of birds, especially Vultures, near launching pads.
Air traffic control and navigation Radar: This radar is used by airports to ensure the safety of
planes. This type of radar detects the proximity of an aircraft and determines its identity and
altitude
Remote sensing: All radars are remote sensors, used for sensing geophysical objects (the
environment). Radar astronomy is used to probe the moon and planets. Earth-resources
monitoring radars measure and map sea conditions, water resources, ice cover, agricultural land
use, forest conditions, geological formations and environmental pollution
Law enforcement: The radar speed meter, familiar to many, is used by police for enforcing speed
limits. Radar has also been considered for making vehicles safer, by warning of an impending
collision, actuating the air bag, or warning of obstructions or people behind a vehicle or in the side
blind zone. It is also employed for the detection of intruders
CHAPTER 3
TYPES OF RADARS
regardless of whether or not they possess a transponder. The operator hears the echoes from any
reflection. The radar therefore transmits and listens continuously, covering the space through
360°. The primary radar function thus consists of detecting a target and measuring its position
by recognition of the useful signal.
A primary radar measurement includes:
the distance D, based on the wave transit time over the two-way path;
an angle θ, based on the position of a directional antenna in azimuth;
the radial velocity, using the Doppler effect
It can be said that such a radar locates a flying object on a quarter circle in the vertical plane, but
cannot know its altitude exactly if it uses a fan-beam antenna. In that case this information must be
obtained by triangulation of several radars. With a 3D radar, however, this data is obtained by using
either a cosecant-squared pattern or scanning at multiple angles with a pencil beam.
USAGE
The rapid wartime development of radar had obvious applications for air traffic control (ATC) as a
means of providing continuous surveillance of air traffic disposition. Precise knowledge of the
positions of aircraft would permit a reduction in the normal procedural separation standards, which
in turn promised considerable increases in the efficiency of the airways system.
This type of radar (now called a primary radar) can detect and report the position of anything that
reflects its transmitted radio signals including, depending on its design, aircraft, birds, weather and
land features. For air traffic control purposes this is both an advantage and a disadvantage. Its
targets do not have to co-operate, they only have to be within its coverage and be able to reflect
radio waves, but it only indicates the position of the targets, it does not identify them.
When primary radar was the only type of radar available, the correlation of individual radar returns
with specific aircraft typically was achieved by the controller observing a directed turn by the
aircraft. Primary radar is still used by ATC today as a backup/complementary system to secondary
radar, although its coverage and information are more limited.
based on the military identification friend or foe (IFF) technology originally developed during
World War II; therefore the two systems are still compatible. Monopulse secondary surveillance
radar (MSSR) is a similar, modern method of secondary surveillance.
The need to be able to identify aircraft more easily and reliably led to another wartime radar
development, the identification friend or foe (IFF) system, which had been created as a means of
positively identifying friendly aircraft from unknowns. This system, which became known in civil
use as secondary surveillance radar (SSR), or in the USA as the air traffic control radar beacon
system (ATCRBS), relies on a piece of equipment aboard the aircraft known as a "transponder."
The transponder is a radio receiver and transmitter pair which receives on 1030 MHz and transmits
on 1090 MHz. The target aircraft transponder replies to signals from an interrogator (usually, but
not necessarily, a ground station co-located with a primary radar) by transmitting a coded reply
signal containing the requested information.
Both the civilian SSR and the military IFF have become much more complex than their war-time
ancestors, but remain compatible with each other, not least to allow military aircraft to operate in
civil airspace. Today's SSR can provide much more detailed information, for example, the aircraft
altitude, as well as enabling the direct exchange of data between aircraft for collision avoidance.
Most SSR systems rely on Mode C transponders, which report the aircraft's pressure altitude. The
pressure altitude is independent of the pilot's altimeter setting, thus preventing false altitude
transmissions if the altimeter is adjusted incorrectly. Air traffic control systems recalculate reported
pressure altitudes to true altitudes based on their own pressure references, if necessary.
Given its primary military role of reliably identifying friends, IFF has much more secure
(encrypted) messages to prevent "spoofing" by the enemy, and is used on many types of military
platforms including air, sea and land vehicles.
BASIC WORKING
The purpose of SSR is to improve the ability to detect and identify aircraft while automatically
providing the flight level (pressure altitude) of an aircraft. An SSR ground station transmits
interrogation pulses on 1030 MHz (continuously in Modes A and C, and selectively in Mode S) as its
antenna rotates, or is electronically scanned, in space. An aircraft transponder within line-of-sight
range 'listens' for the SSR interrogation signal and transmits a reply on 1090 MHz that provides
aircraft information. The reply sent depends on the interrogation mode. The aircraft is displayed as
a tagged icon on the controller's radar screen at the measured bearing and range. An aircraft without
an operating transponder still may be observed by primary radar, but would be displayed to the
controller without the benefit of SSR derived data. It is typically a requirement to have a working
transponder in order to fly in controlled air space and many aircraft have a back-up transponder to
ensure that condition is met.
suspected. For this purpose, tracking radars use special search patterns, such as helical, T.V. raster,
cluster, and spiral patterns, to name a few.
Here the radar is dedicated to the tracking function. A surveillance radar might provide coordinates
for the tracking radar, which then tracks a single target. These radars may also use the older method
of conical scan.
Typical continuous tracker radar characteristics include a very high pulse repetition frequency
(PRF), a very narrow pulse width, and a very narrow beam width. These characteristics, while
providing extreme accuracy, limit the range and make initial target detection difficult.
Radar target tracking uses the radar to observe and analyse a locked target, obtaining its speed,
position and other information, and establishing a corresponding dynamic model of the target's
motion. A computer then predicts and evaluates this information for the target at the next instant
through a series of filtering methods, so as to establish the target trajectory accurately. The target
tracking algorithm is an important part of radar data processing, and its basic principle is shown in
the figure.
When the target manoeuvres, the value of the residual V in the measurement increases, and
manoeuvre detection is based on this change in V. The computer then determines the filter gain,
the covariance matrix and other parameters, and the filter outputs the updated state, thus completing
the tracking of a manoeuvring target.
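The predict-and-correct loop described above can be sketched with a simple alpha-beta filter (the gains and the noiseless measurements are illustrative choices; the report's actual filtering method is not specified here):

```python
# Alpha-beta tracking filter sketch: predict the next position from the
# current state, then correct it with the measured residual.
# Gains and measurements are illustrative assumptions.
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.5):
    x, v = measurements[0], 0.0          # initial position and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict
        r = z - x_pred                   # residual drives the update
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
        estimates.append(x)
    return estimates

# Target moving at a constant 10 m per scan, noiseless for clarity:
track = alpha_beta_track([0.0, 10.0, 20.0, 30.0, 40.0])
print([round(x, 2) for x in track])      # converges toward the true positions
```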
An imaging radar works very much like a flash camera in that it provides its own light to illuminate
an area on the ground and takes a snapshot picture, but at radio wavelengths. A flash camera sends out
a pulse of light (the flash) and records on film the light that is reflected back at it through the
camera lens. Instead of a camera lens and film, a radar uses an antenna and digital computer tapes
to record its images. In a radar image, one can see only the light that was reflected back towards
the radar antenna.
A typical radar (RAdio Detection and Ranging) measures the strength and round-trip time of the
microwave signals that are emitted by a radar antenna and reflected off a distant surface or object.
The radar antenna alternately transmits and receives pulses at particular microwave wavelengths
(in the range 1 cm to 1 m, which corresponds to a frequency range of about 300 MHz to 30 GHz)
and polarizations (waves polarized in a single vertical or horizontal plane). For an imaging radar
system, about 1500 high-power pulses per second are transmitted toward the target or imaging
area, with each pulse having a pulse duration (pulse width) of typically 10-50 microseconds (µs).
The pulse normally covers a small band of frequencies, centered on the frequency selected for the
radar. Typical bandwidths for an imaging radar are in the range 10 to 200 MHz. At the Earth's
surface, the energy in the radar pulse is scattered in all directions, with some reflected back toward
the antenna. This backscatter returns to the radar as a weaker radar echo and is received by the
antenna in a specific polarization (horizontal or vertical, not necessarily the same as the transmitted
pulse). These echoes are converted to digital data and passed to a data recorder for later processing
and display as an image. Given that the radar pulse travels at the speed of light, it is relatively
straightforward to use the measured time for the roundtrip of a particular pulse to calculate the
distance or range to the reflecting object. The chosen pulse bandwidth determines the resolution in
the range (cross-track) direction. Higher bandwidth means finer resolution in this dimension.
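The bandwidth-resolution relation stated above is commonly written δR = c/(2B); a short sketch using the bandwidth range quoted in the text:

```python
# Range (cross-track) resolution from pulse bandwidth: delta_R = c / (2 * B).
C = 3.0e8  # speed of light in m/s (approximate)

def range_resolution_m(bandwidth_hz: float) -> float:
    return C / (2.0 * bandwidth_hz)

# The quoted imaging-radar bandwidths of 10 MHz and 200 MHz give:
print(range_resolution_m(10e6))    # 15.0 m
print(range_resolution_m(200e6))   # 0.75 m
```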
In the case of imaging radar, the radar moves along a flight path and the area illuminated by the
radar, or footprint, is moved along the surface in a swath, building the image as it does so.
Fig. 3.4 Building up a radar image using the motion of the platform
The length of the radar antenna determines the resolution in the azimuth (along-track) direction of
the image: the longer the antenna, the finer the resolution in this dimension. Synthetic Aperture
Radar (SAR) refers to a technique used to synthesize a very long antenna by combining signals
(echoes) received by the radar as it moves along its flight track. Aperture means the opening used
to collect the reflected energy that is used to form an image. In the case of a camera, this would be
the shutter opening; for radar it is the antenna. A synthetic aperture is constructed by moving a real
aperture or antenna through a series of positions along the flight track.
As the radar moves, a pulse is transmitted at each position; the return echoes pass through the
receiver and are recorded in an 'echo store.' Because the radar is moving relative to the ground, the
returned echoes are Doppler-shifted (negatively as the radar approaches a target; positively as it
moves away). Comparing the Doppler-shifted frequencies to a reference frequency allows many
returned signals to be "focused" on a single point, effectively increasing the length of the antenna
that is imaging that particular point. This focusing operation, commonly known as SAR processing,
is now done digitally on fast computer systems. The trick in SAR processing is to correctly match
the variation in Doppler frequency for each point in the image: this requires very precise knowledge
of the relative motion between the platform and the imaged objects (which is the cause of the
Doppler variation in the first place).
Synthetic aperture radar is now a mature technique used to generate radar images in which fine
detail can be resolved. SARs provide unique capabilities as an imaging tool. Because they provide
their own illumination (the radar pulses), they can image at any time of day or night, regardless of
sun illumination. And because the radar wavelengths are much longer than those of visible or
infrared light, SARs can also "see" through cloudy and dusty conditions that visible and infrared
instruments cannot.
What is a radar image?
Radar images are composed of many dots, or picture elements. Each pixel (picture element) in the
radar image represents the radar backscatter for that area on the ground: darker areas in the image
represent low backscatter, brighter areas represent high backscatter. Bright features mean that a
large fraction of the radar energy was reflected back to the radar, while dark features imply that
very little energy was reflected. Backscatter for a target area at a particular wavelength will vary
for a variety of conditions: size of the scatterers in the target area, moisture content of the target
area, polarization of the pulses, and observation angles. Backscatter will also differ when different
wavelengths are used.
Scientists measure backscatter, also known as radar cross section, in units of area (such as square
meters). The backscatter is often related to the size of an object, with objects approximately the
size of the wavelength (or larger) appearing bright (i.e. rough) and objects smaller than the
wavelength appearing dark (i.e. smooth). Radar scientists typically use a measure of backscatter
called normalized radar cross section, which is independent of the image resolution or pixel size.
Normalized radar cross section (sigma0) is measured in decibels (dB). Typical values of sigma0
for natural surfaces range from +5 dB (very bright) to -40 dB (very dark).
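The decibel scale used for sigma0 is the usual power-ratio conversion; a small sketch (the sample values are illustrative):

```python
import math

# Decibel conversions for normalized radar cross section (power ratios):
# dB = 10 * log10(linear), and linear = 10 ** (dB / 10).
def to_db(linear: float) -> float:
    return 10.0 * math.log10(linear)

def from_db(db: float) -> float:
    return 10.0 ** (db / 10.0)

# +5 dB (very bright) is about 3.16 in linear terms; -40 dB is 0.0001.
print(round(from_db(5.0), 2))  # 3.16
print(from_db(-40.0))          # 0.0001
```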
A useful rule-of-thumb in analyzing radar images is that the higher or brighter the backscatter on
the image, the rougher the surface being imaged. Flat surfaces that reflect little or no microwave
energy back towards the radar will always appear dark in radar images. Vegetation is usually
moderately rough on the scale of most radar wavelengths and appears as grey or light grey in a
radar image. Surfaces inclined towards the radar will have a stronger backscatter than surfaces
which slope away from the radar and will tend to appear brighter in a radar image. Some areas not
illuminated by the radar, like the back slope of mountains, are in shadow, and will appear dark.
When city streets or buildings are lined up in such a way that the incoming radar pulses are able to
bounce off the streets and then bounce again off the buildings (called a double-bounce) and directly
back towards the radar they appear very bright (white) in radar images. Roads and freeways are
flat surfaces so appear dark. Buildings which do not line up so that the radar pulses are reflected
straight back will appear light grey, like very rough surfaces.
Backscatter is also sensitive to the target's electrical properties, including water content. Wetter
objects will appear bright, and drier targets will appear dark. The exception to this is a smooth
body of water, which will act as a flat surface and reflect incoming pulses away from a target; these
bodies will appear dark.
Backscatter will also vary depending on the use of different polarization. Some SARs can transmit
pulses in either horizontal (H) or vertical (V) polarization and receive in either H or V, with the
resultant combinations of HH (Horizontal transmit, Horizontal receive), VV, HV, or VH.
Additionally, some SARs can measure the phase of the incoming pulse (one wavelength = 2pi in
phase) and therefore measure the phase difference (in degrees) in the return of the HH and VV
signals. This difference can be thought of as a difference in the roundtrip times of HH and VV
signals and is frequently the result of structural characteristics of the scatterers. These SARs can
also measure the correlation coefficient for the HH and VV returns, which can be considered as a
measure of how alike (between 0/not alike and 1/alike) the HH and VV scatterers are.
Different observation angles also affect backscatter. Track angle will affect backscatter from very
linear features: urban areas, fences, rows of crops, ocean waves, fault lines. The angle of the radar
wave at the Earth's surface (called the incidence angle) will also cause a variation in the backscatter:
low incidence angles (perpendicular to the surface) will result in high backscatter; backscatter will
decrease with increasing incidence angles.
A Synthetic Aperture Radar (SAR) is a coherent, mostly airborne or spaceborne, side-looking
radar system which utilizes the flight path of the platform to simulate an extremely large antenna or
aperture electronically, and thereby generates high-resolution remote sensing imagery. Over time,
individual transmit/receive cycles are completed with the data from each cycle being stored
electronically. The signal processing uses magnitude and phase of the received signals over successive
pulses from elements of a synthetic aperture. After a given number of cycles, the stored data is
recombined (taking into account the Doppler effects inherent in the different transmitter to target
geometry in each succeeding cycle) to create a high resolution image of the terrain being over flown.
SAR Working
The SAR works similarly to a phased array but, in contrast to the large number of parallel antenna
elements of a phased array, SAR uses one antenna in time-multiplex. The different geometric positions
of the antenna elements are now the result of the moving platform.
The SAR-processor stores all the radar returned signals, as amplitudes and phases, for the time period
T from position A to D. Now it is possible to reconstruct the signal which would have been obtained
by an antenna of length v · T, where v is the platform speed. As the line of sight direction changes
along the radar platform trajectory, a synthetic aperture is produced by signal processing that has the
effect of lengthening the antenna. Making T large makes the "synthetic aperture" large, and hence a higher resolution can be achieved.
As a target (like a ship) first enters the radar beam, the backscattered echoes from each transmitted
pulse begin to be recorded. As the platform continues to move forward, all echoes from the target for
each pulse are recorded during the entire time that the target is within the beam. The point at which
the target leaves the view of the radar beam some time later, determines the length of the simulated or
synthesized antenna. The synthesized expanding beam width, combined with the increased time a
target is within the beam as ground range increases, balance each other, such that the resolution remains
constant across the entire swath.
The achievable azimuth resolution of a SAR is approximately equal to one-half the length of the actual (real) antenna and does not depend on platform altitude (distance). This, however, requires exact knowledge of the flight path and the velocity of the platform. Using such a technique, radar designers are able to achieve resolutions that would require real aperture antennas so large as to be impractical (practical arrays range in size only up to about 10 m).
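The claim that the azimuth resolution reduces to half the real antenna length, independent of range, can be verified with a short calculation. The Python sketch below is our illustration (the function name and the X-band values are assumptions, not from the report); it derives the synthetic aperture length from the real beamwidth and shows the range dependence cancelling.

```python
def sar_azimuth_resolution(antenna_length_m: float, wavelength_m: float,
                           slant_range_m: float) -> float:
    """Strip-map SAR azimuth resolution, derived step by step.

    The real antenna's beamwidth ~ lambda/D illuminates a synthetic
    aperture of length L_syn = R * lambda / D; the synthetic beam then
    gives an azimuth resolution of lambda * R / (2 * L_syn) = D / 2,
    so the range R cancels out.
    """
    beamwidth = wavelength_m / antenna_length_m          # real beamwidth (rad)
    l_syn = slant_range_m * beamwidth                    # synthetic aperture length
    return wavelength_m * slant_range_m / (2.0 * l_syn)  # = antenna_length / 2

# Same 10 m antenna at an aircraft range and a satellite range:
# both evaluate to 5 m, independent of the distance.
print(sar_azimuth_resolution(10.0, 0.03, 10e3))
print(sar_azimuth_resolution(10.0, 0.03, 800e3))
```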
Applications:
CHAPTER 4
Inverse synthetic aperture radar (ISAR) is a radar technique that uses radar imaging to generate a two-dimensional high-resolution image of a target. It is analogous to conventional synthetic aperture radar (SAR), except that ISAR utilizes the movement of the target, rather than that of the emitter, to create the synthetic aperture. ISAR radars have a significant role aboard maritime patrol aircraft, providing a radar image of sufficient quality for target recognition. In situations where other radars display only a single unidentifiable bright moving pixel, the ISAR image is often adequate to discriminate between various missiles, military aircraft, and civilian aircraft.
ISAR is utilized in maritime surveillance for the classification of ships and other objects. In these
applications the motion of the object due to wave action often plays a greater role than object rotation.
For instance, a feature that extends far above the surface of a ship, such as a mast, will produce a strong sinusoidal response that is clearly identifiable in a two-dimensional image. Images sometimes
produce an uncanny similarity to a visual profile with the interesting effect that as the object rocks
towards or away from the receiver the alternating doppler returns cause the profile to cycle between
upright and inverted. ISAR for maritime surveillance was pioneered by Texas Instruments in
collaboration with the Naval Research Laboratory and became an important capability of the P-3 Orion
and the S-3B Viking US Navy aircraft.
Research has been done also with land based ISAR. The difficulty in utilizing this capability is that
the object motion is far less in magnitude and usually less periodic than in the maritime case.
SAR techniques are often used to synthesize a large antenna aperture from small apertures. Synthetic
aperture processing coherently combines signals obtained from sequences of small aperture at different
viewing angles to emulate the result that would be obtained using a large antenna aperture. Coherent
processing maintains the relative phases of successive transmitted signals and thus retains both the
amplitude and the phase information about the target. A large aperture is synthesized by mounting the
radar on a moving platform, generally an aircraft or a satellite, although other carriers, such as
helicopters and ground-based rails, have been employed.
27 | P a g e B. Tech, ECE Dept., CUCEK(CUSAT)
Design, analysis and simulation of ISAR Motion Compensation Algorithms
The most common modes operated in SAR are the strip-map mode and the spotlight mode. In the strip-map mode, the antenna beam points in a fixed direction relative to the direction of motion of the platform, as illustrated in Figure 1.1. As the platform moves, a strip of area is swept over. If the antenna
direction is off the perpendicular of the flight path, it is referred to as squinted strip-map SAR. The
strip-map mode can generate wide-area maps of the terrain. The length of the imaged area is
determined by the length of the data collection, and the azimuth resolution in the along-track direction
is determined by the antenna length, that is, the dimension along the flight direction. It should be noted
that, after correcting for the range migration, the azimuth can be identified as the cross-range and the
azimuth resolution becomes the cross-range resolution.
In the spotlight mode, the antenna has a narrower beamwidth and points to the same small patch area
when the physical aperture moves through the length of the synthetic aperture, as shown in Figure 1.1.
This mode typically generates images of smaller scenes at a finer resolution. The azimuth resolution
is determined by angular variation spanned during the formation of the synthetic aperture, and the size
of the imaged area is determined by the antenna beamwidth. To reconstruct the radar image of a target
from a sequence of returned signals, it is required that each returned signal must be obtained with a
different view of the target. Thus, a relative rotation between the radar and the target is necessary for
creating different aspect angles of the target, such that each radar transmitted signal will capture a
different view of the target.
Now, we should pay attention to the relative motion between the radar platform and the target. It means
the motion is not necessarily produced by a moving platform. If the radar is stationary and the target
moves with respect to it, an improvement in cross-range resolution can still be obtained. To emphasize the concept of relative motion, one could argue that whether the configuration is called stationary target and moving platform or moving target and stationary platform depends only on where the reference coordinate system is placed: the former arises by placing the reference system on the target and the latter by placing it on the radar. According to this view, the differences between
SAR and ISAR would depend only on where the reference system is placed. Such a concept is depicted
in Figure 1.2, where a spotlight SAR configuration is transformed into an ISAR configuration by
moving the reference system from the target to the radar.
Conversely, the same concept may be argued by starting with a controlled ISAR configuration, such
as the turntable experiment. In the turntable configuration, the radar antenna is fixed on the ground
(typically mounted on a turret), and the target is placed on a rotating turntable, as depicted in Figure
1.3a. By moving the reference system from the radar to the target, a circular SAR geometry can be
enabled, as depicted in Figure 1.3b. Where the reference system is placed determines the type of
configuration (i.e., SAR or ISAR configuration), but in practice a subtle yet significant detail exists
that substantially defines the difference between SAR and ISAR.
This difference depends not on reference system placement (this may be arbitrary to avoid affecting
the system physically) but on the target’s cooperation. To better explain this concept, one may place
the reference system on the target. If such a target moves with respect to the radar with unknown motion parameters (a so-called non-cooperative target), the synthetic aperture formed during the coherent processing interval differs from that produced by the expected, controlled motion of the platform, as in the turntable experiment. Thus, any SAR image formation that follows would be based on an erroneously predicted synthetic aperture and lead to the formation of a defocused image. A
pictorial example of a synthetic aperture formed by a non-cooperative target’s motion is shown in
Figure 1.4a, where the unknown and non-cooperative motion of the target generates an unpredictable
synthetic aperture.
At this stage, it should be said that whether the target is cooperative raises a number of issues. These issues are exactly why ISAR must determine the relative motion between the radar and the target in order to form radar images of non-cooperative targets. In fact, the
synthetic aperture formed by an unknown and arbitrary target’s motion with respect to the radar is also
unknown. This means that the positions of the synthesized aperture elements are not known a priori.
Since SAR image processing is based on such knowledge, we should say that any SAR image
formation algorithm may not be successfully applied to form a focused radar image of a non-
cooperative target. It is also worth pointing out that in cases where both the radar and the target are
moving, if the target’s motion is unknown by the radar, the SAR image processing would fail again.
Generally speaking, in cases where the radar platform is stationary and a target’s motion is non-
cooperative, ISAR image processing algorithms should be applied instead of using SAR algorithms.
It will be shown in the remainder of this report that ISAR processing is specifically designed to handle
radar imaging of non-cooperative targets. Because no prior information about the target’s motion and,
thus, about the synthetic aperture is taken into account, ISAR imaging can be regarded as a blind
version of SAR imaging, where the target’s motion parameters must be estimated in the process of
forming a focused target image.
The target’s translational motion is defined as the movement of the target along the range axis (or
RLOS axis) of the radar. The target’s translational motion is one of the most significant components
that affects image quality in the ISAR image. The main effect of the target’s translational motion is
shifting the positions of the scatterers on the target along the range axis. This is because the target's radial distance changes from pulse to pulse, since consecutive pulses are transmitted at different time instants while the target is moving. Therefore, the phase of the collected electric field data is misaligned along
the pulses so that the Doppler frequencies that are used to estimate the exact locations of the target are
spread out over a finite number of range cells. When the Fourier transform is directly applied to the
collected data that contain translational motion, the location of the point scatterer is poorly estimated
since the scatterer is visible for all of that finite number of range cells. Therefore, the scatterers look
as if they “walk” over the range cells. The range-walk phenomenon can negatively affect the range
resolution, range accuracy, and signal-to-noise ratio of the resulting ISAR image. Therefore, the
target’s resultant image before the compensation may be smeared in the cross-range direction and
defocused in range and cross-range directions. The amount of smearing, of course, depends on the
amount of target’s radial motion (or the radial velocity). Although there may be little or no smearing
effects for small radial velocity values as in the case of slowly moving ship targets, the image smearing
can be drastic for fast moving targets such as fighter airplanes. Usually, an algorithm is applied to
overcome the range walk issue by trying to align the range bins. The common name for keeping the
scatterers in their range cells is range tracking.
Common algorithms used for this purpose are listed as follows: the dominant scatterer algorithm, the sub-aperture approach, the cross-range centroid tracking algorithm, the phase gradient autofocus technique, and the multiple PPP technique.
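As a sketch of the range-tracking idea, the NumPy snippet below (our illustration, not the report's implementation) aligns each range profile to a reference profile by envelope cross-correlation — the method discussed later for range-cell alignment. Circular correlation via the FFT keeps the sketch short; the data layout is an assumption.

```python
import numpy as np

def align_range_profiles(profiles: np.ndarray) -> np.ndarray:
    """Coarse range alignment (range tracking) by envelope cross-correlation.

    profiles: complex array of shape (num_pulses, num_range_cells).
    Each profile is circularly shifted so that its envelope best matches
    the first (reference) profile.
    """
    ref = np.abs(profiles[0])
    aligned = profiles.copy()
    for k in range(1, profiles.shape[0]):
        env = np.abs(profiles[k])
        # Circular cross-correlation via FFT; the peak location gives the
        # range-cell shift between the two envelopes.
        corr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(env))).real
        shift = int(np.argmax(corr))
        aligned[k] = np.roll(profiles[k], shift)
    return aligned
```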
The Doppler shift of a scatterer at cross-range offset rc from the rotation centre, for a target rotating at rate Ω, is

fD = 2·Ω·rc/λ = 2·Ω·rc·fc/c

where fc is the carrier or centre frequency of the radar, λ is the wavelength, and c is the propagation velocity. We will initially assume that fD is constant during the viewing-angle change that occurs during a small integration time T.
For a radar that has a Doppler frequency resolution of ΔfD, the cross-range resolution is given by

Δrc = λ·ΔfD/(2·Ω) = λ/(2·Ω·T) for ΔfD = 1/T

The cross-range resolution Δrc can be seen to depend on the resolvable difference in the Doppler frequencies from two scatterers in the same slant range cell.
The slant-range resolution for ISAR is obtained by using wideband waveforms. Regardless of the type of waveform, the achievable range resolution is approximately c/2β, where β is the waveform bandwidth. Two types of pulses are used, the chirp pulse and the stepped-frequency waveform, while chirp pulse and stretch waveforms are the most common for SAR. Stepped-frequency waveforms have been found to be useful for ISAR when the application requires extreme resolution. The Rayleigh resolution for the chirp waveform is

Δr = c/(2β)
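The two resolution formulas above can be exercised numerically. The following Python snippet is our illustration; the radar parameters are assumed, not taken from the report.

```python
C = 3e8  # propagation velocity (m/s)

def slant_range_resolution(bandwidth_hz: float) -> float:
    """Achievable slant-range resolution: c / (2 * bandwidth)."""
    return C / (2.0 * bandwidth_hz)

def cross_range_resolution(wavelength_m: float, rotation_rate_rad_s: float,
                           integration_time_s: float) -> float:
    """Cross-range resolution: lambda / (2 * Omega * T)."""
    return wavelength_m / (2.0 * rotation_rate_rad_s * integration_time_s)

# Illustrative: a 150 MHz bandwidth gives a 1 m slant-range resolution.
print(slant_range_resolution(150e6))  # 1.0
# Illustrative: X-band (3 cm), 0.05 rad/s rotation, 2 s integration time.
print(cross_range_resolution(0.03, 0.05, 2.0))
```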
Fig 4.6 - Echo spectrum and associated range (delay) profile for a single burst.
Synthetic processing of stepped-frequency waveforms requires the conversion of echo data, collected in the frequency domain, into synthetic range profiles.
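A minimal sketch of that conversion, assuming one burst of N stepped-frequency echoes from a single point scatterer (all parameter values are illustrative): an inverse FFT of the frequency-domain samples yields the synthetic range (delay) profile.

```python
import numpy as np

# Convert one burst of stepped-frequency echo data, collected in the
# frequency domain, into a synthetic range (delay) profile via an IFFT.
N, f0, df = 64, 10e9, 1e6            # steps, start frequency, step size (Hz)
c, R = 3e8, 75.0                     # propagation velocity (m/s), target range (m)

freqs = f0 + df * np.arange(N)
echoes = np.exp(-1j * 4 * np.pi * freqs * R / c)   # two-way phase of a point scatterer

profile = np.abs(np.fft.ifft(echoes))              # synthetic range profile
peak = int(np.argmax(profile))
# The range-cell size is c/(2*N*df); for R = 75 m the peak falls in cell 32.
print(peak)  # 32
```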
To generate an ISAR range-Doppler image of a moving target, if the target's effective rotation angle is greater than the angle (λ/L_LOS)^(1/2)/2 (where λ is the wavelength and L_LOS is the projected target size along the radar LOS), the generated range-Doppler image will be defocused in the range domain. However, in the Doppler domain, based on (1.30), the Doppler shift fD of any scatterer on the target is determined by the rotation rate Ω, the scatterer's cross-range displacement rc from the centre of rotation, and the wavelength λ:

fD = 2·Ω·rc/λ
Fig 4.7 – (a) A stationary radar and a rotating target is equivalent to (b) a spotlight SAR imaging of a
stationary target.
The instantaneous range and rotation angle can be written in terms of the target motion history:

R(t) = R0 + v0·t + (1/2)·a0·t²
θ(t) = θ0 + Ω0·t + (1/2)·γ0·t²

where the translational motion parameters are the initial range R0, velocity v0, and acceleration a0, and the angular rotation parameters are the initial angle θ0, angular velocity Ω0, and angular acceleration γ0. If these translational
motion parameters can be accurately estimated, the extraneous phase term can be completely removed. Thus,
the target’s reflectivity density function ρ(x, y) can be reconstructed exactly by taking a 2-D inverse Fourier
transform. Thus, for ISAR range-Doppler image formation, the first step is to carry out the translational motion
compensation (TMC). It estimates the target’s translational motion parameters and removes the extra phase
term, such that the target’s range is no longer varying with time. Then, by taking the Fourier transform along
the pulses (slow-time) domain, the range-Doppler image of the target can be reconstructed. In many cases,
however, the target may also rotate about an axis. The rotational motion can make the Doppler shifts time varying; thus, by using the Fourier transform, the reconstructed ISAR image will be smeared in the Doppler domain. In this case, rotational motion compensation (RMC) must be carried out to correct for the rotation. After applying the TMC and RMC,
ISAR range-Doppler images can be correctly reconstructed by a 2-D Fourier transform. Therefore, the
previously described image formation is typically addressed as the ISAR range-Doppler image formation. To
reconstruct an ISAR range-Doppler image, we first take range compression to obtain ISAR range profiles, after
which we apply TMC to remove target’s translational motion. The common process of TMC includes two
stages: range alignment and phase adjustment. If the target has more significant rotational motion during the
CPI time, the formed ISAR range-Doppler image can still be unfocused and smeared due to the rotation-induced
time-varying Doppler spectrum. In these cases, additional image-focusing algorithms for correcting rotation errors, such as the polar format algorithm (PFA), must be applied. After removing the translational and rotational motion, the ISAR range-
Doppler image is finally generated by taking the Fourier transform in the pulses domain. To display ISAR image
in the range and cross-range domain, cross-range scaling is also needed to convert Doppler shift to cross-range
domain.
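The image-formation chain just described can be sketched compactly. Assuming TMC (range alignment plus phase adjustment) has already been applied, a Fourier transform along the pulses axis produces the range-Doppler image; in the simulation below (our illustration, with assumed parameter values) a single scatterer lands in its correct range and Doppler cell.

```python
import numpy as np

def range_doppler_image(range_profiles: np.ndarray) -> np.ndarray:
    """Form an ISAR range-Doppler image from motion-compensated data.

    range_profiles: complex array of shape (num_pulses, num_range_cells),
    assumed already range-aligned and phase-adjusted (TMC applied).
    An FFT along the pulses (slow-time) axis resolves each range cell
    into Doppler, i.e. cross-range.
    """
    return np.fft.fftshift(np.fft.fft(range_profiles, axis=0), axes=0)

# Simulate one scatterer with a constant 125 Hz Doppler shift in range cell 12.
num_pulses, num_cells, prf = 128, 32, 1000.0
t = np.arange(num_pulses) / prf
data = np.zeros((num_pulses, num_cells), dtype=complex)
data[:, 12] = np.exp(1j * 2 * np.pi * 125.0 * t)

img = np.abs(range_doppler_image(data))
doppler_bin, range_cell = np.unravel_index(np.argmax(img), img.shape)
print(range_cell)   # 12 (the scatterer's range cell)
print(doppler_bin)  # 80 = 64 (zero-Doppler bin after fftshift) + 125/1000*128
```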
Pulse-compressed range profiles are generated from the received echoes. Dynamic range is an important specification in radar receivers. It is defined as the ratio between the maximum and minimum values of the receivable signal intensity and is formulated as

DR (dB) = 20·log10(Amax/Amin)
Where Amax and Amin are the maximum and minimum intensity values in linear scales, respectively. For example,
to get 60 dB dynamic range, the ratio between the maximum and minimum values of capable received signal
intensity (Amax/Amin) must be 1000.
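The stated 60 dB example corresponds to the amplitude (20·log10) convention, which the short sketch below assumes:

```python
import math

def dynamic_range_db(a_max: float, a_min: float) -> float:
    """Dynamic range in dB from max/min signal amplitudes (20*log10 convention)."""
    return 20.0 * math.log10(a_max / a_min)

# An amplitude ratio of 1000 corresponds to 60 dB, matching the text's example.
print(dynamic_range_db(1000.0, 1.0))  # 60.0
```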
Fig 4.9 – ISAR data arranged in 2-D range cells and pulses domain
Fig 4.10 – Aligned range profiles after applying envelope cross correlation
The range-cell alignment process is usually implemented by aligning the strongest magnitude peak in each range profile. An envelope cross-correlation method between two range profiles is commonly used to estimate the range-cell shift between two profiles. Figure 3.8a shows that, after the range alignment, the ISAR range profiles in Figure 3 become aligned. Figure 4b is the phase function at range cell no. 50 after the range alignment, which is still nonlinear. Thus, a phase adjustment procedure is required. Aligning the range can also cause phase drifts. Figure 3.9a shows a resulting nonlinear phase function across pulses at a selected range cell, caused by the range alignment process. To remove the phase drifts and make the phase functions linear at the range cells the target occupies, we must apply a phase adjustment processing, called fine motion compensation.

In ISAR range-Doppler images, Doppler shifts are induced by the target's rotation. If a target rotates too fast or the CPI is too long, the Doppler frequency shifts can still be time varying after range alignment and phase adjustment. In these cases, the final reconstructed ISAR range-Doppler image can still be smeared. Therefore, we must correct for the fast rotation of the target. The PFA is a well-known technique for compensating for rotational motion [5]. The PFA is based on the tomography developed in medical imaging, which has been used to reconstruct a spatial-domain object. To do this, we must take a series of observations through the object. According to the projection slice theorem, an observation is a projection of the object onto a line. Thus, by applying the Fourier transform to a set of observations over a series of aspect angles, the series of observations populates a region of Fourier space. This projected data surface is used to reconstruct an image of the object through the inverse Fourier transform. A 2-D Fourier transform of a spatial function f(x, y) is defined as

F(u, v) = ∫∫ f(x, y)·e^(−j2π(ux+vy)) dx dy
Fig 4.11 – Phase function at the range cell (a) before phase adjustment and (b) after phase adjustment, when it
becomes linear.
Fig 4.13 – Relationship between the radar range profiles in the spatial domain and the projected radar data surface in the Fourier domain
For a projection of f(x, y) at an angle θ, its Fourier transform becomes a slice line through F(u, v) at the angle θ. Because a radar-received pulse signal can be seen as the projection of the electromagnetic scattering from an object onto the radar line of sight (LOS), the PFA is suitable for radar image formation. If the angle of the radar LOS is θ, then the projection of a target f(x, y) onto the LOS axis becomes a projected range profile. On the other hand, in the Fourier domain, the Fourier transform of the radar signal, F(u, v), will produce a line segment that is offset from the origin (u = 0, v = 0) by the amount of the carrier frequency and has the same angle θ as the radar LOS. The length of the line segment is determined by the bandwidth of the radar signal. When the radar LOS angle sweeps, the swept line segments become the projected radar data surface in Fourier space. The relationship between the radar range profiles in the spatial domain and the projected radar data surface in the Fourier domain is illustrated in Fig. 4.13.

In principle, the ISAR PFA is similar to the spotlight SAR PFA. However, in ISAR the target's aspect angle is changed by the target's motion and thus is unknown and uncontrollable, whereas in spotlight SAR the radar motion determines the aspect angle. Because the aspect change defines the data surface, the target rotation must be estimated before applying the PFA in ISAR. To implement the PFA in ISAR, we must measure the target motion parameters from the received radar data to model the data surface, project the data surface onto a planar surface, interpolate the data into equally spaced samples, and perform the inverse Fourier transform.
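The projection slice theorem that underpins the PFA can be checked numerically. In the NumPy sketch below (ours), the 1-D FFT of a projection of f(x, y) equals the corresponding central slice of its 2-D FFT; the θ = 0 slice is used so that no interpolation is needed.

```python
import numpy as np

# Projection-slice theorem, checked numerically at angle theta = 0:
# the 1-D FFT of the projection of f(x, y) onto the x-axis equals the
# v = 0 row of the 2-D FFT of f.
rng = np.random.default_rng(0)
f = rng.standard_normal((16, 16))          # arbitrary real "object" f(y, x)

projection = f.sum(axis=0)                 # project along y onto the x-axis
slice_1d = np.fft.fft(projection)          # 1-D FT of the projection

F = np.fft.fft2(f)                         # 2-D FT of the object
central_slice = F[0, :]                    # the v = 0 slice through the origin

print(np.allclose(slice_1d, central_slice))  # True
```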
Applications:
Maritime surveillance: Maritime surveillance aircraft commonly use ISAR systems to detect,
image and classify surface ships and other objects in all weather conditions. Because of different
radar reflection characteristics of the sea, the hull, superstructure, and masts as the vessel moves
on the surface of the sea, vessels usually stand out in ISAR images. There can be enough radar
information derived from ship motion, including pitching and rolling, to allow the ISAR operator
to manually or automatically determine the type of vessel being observed.
Imaging Objects in Space: Another ISAR (also called “delayed Doppler”) application is the use
of one or more large radio telescopes to generate radar images of objects in space at very long
ranges.
Detecting enemy aircraft: In defence organisations, ISAR is used to detect and locate enemy aircraft and fighters, which can then be engaged to protect the region.
CHAPTER 5
Motion compensation of ISAR can be carried out in two steps. The first step is range realignment in
which the high resolution range profiles are aligned in the range direction by placing the returns of
different pulses from the same scatterer in the same range cell. It is a coarse compensation of
translational motion. The second step is phase compensation, which removes the residual translational
motion by multiplying the range aligned signals with the conjugate phase of a selected reference point.
Phase compensation is the fine compensation of translational motion. It is usually called autofocus
with the reference point being termed the focal point. Many ISAR autofocus methods have appeared.
A simple approach to ISAR autofocus is to choose as the reference point a range cell containing a
strong scatterer. For a complex target that does not have a stable prominent scatterer, an estimate of
the pulse-to-pulse phase difference of the reference point can be made by taking the phase differences
for each range cell and averaging them weighted by the amplitudes of the content of each range cell.
An alternative is to average the phase differences in the range cells where only strong scatterers exist.
However, phase unwrapping is crucial for these approaches because phase averaging is needed for
estimating the phase of the translational motion. Another method, based on image contrast, has also
been proposed recently for ISAR autofocus. In these approaches, many images are produced with
different focusing parameters. One that produces the best image contrast is selected as the optimal
focusing parameter. However, the computational load of these approaches is high. Phase unwrapping may be appropriate in a low-noise environment in which the amplitude of the signal never approaches zero. However, in the more realistic high-noise environment, phase unwrapping may
become ambiguous. In this report, we develop some approaches for ISAR auto-focusing which obviate
phase unwrapping by estimating the complex exponential signal vector whose phase corresponds to
the translational motion rather than the phase itself. After the complex signal vector is estimated, ISAR
autofocus can be conducted by compensating all the range profiles with the complex signal vector.
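A minimal sketch of the simplest approach above — choosing the range cell with the strongest scatterer as the focal point and compensating every profile with the conjugate phase of that cell, so no phase unwrapping is needed. The function name and data layout are our assumptions; the reference cell is assumed to be nonzero in every pulse.

```python
import numpy as np

def dominant_scatterer_autofocus(profiles: np.ndarray) -> np.ndarray:
    """Phase compensation using the range cell with the strongest return
    as the focal point.

    profiles: complex array of shape (num_pulses, num_range_cells),
    assumed already range-aligned. The unit-magnitude conjugate phase of
    the reference cell is applied to every pulse, working directly with
    the complex signal (no phase unwrapping).
    """
    # Pick the range cell with the largest mean magnitude as the reference.
    ref_cell = int(np.argmax(np.abs(profiles).mean(axis=0)))
    ref = profiles[:, ref_cell]
    correction = np.conj(ref / np.abs(ref))   # unit-magnitude conjugate phase
    return profiles * correction[:, np.newaxis]
```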
This report considers ISAR autofocus as a problem of array processing and solves it from the
perspective of array calibration. Four new approaches based on array processing theory for estimating
the complex vector of translational motion for ISAR auto-focusing are developed. The first and second
approaches make use of conventional and optimum beamforming concepts. The third and fourth
approaches use signal and noise subspaces of an estimated covariance matrix respectively. They have
a computational advantage over the image contrast method, as numerous images with different focusing parameters need not be produced to calculate the image contrast.
To form Inverse Synthetic Aperture Radar (ISAR) imagery the radar imaging system has
to compensate for translational motion of the target. Typically, this is done either in a two step
approach by performing range alignment and then non-parametric autofocus or by parametric
joint range alignment and autofocus algorithms. Most of the parametric techniques model the phase
error induced by the motion of the target as a polynomial function of sufficient order (typically
allowing for velocity and acceleration error). They then employ image-domain measures of image focus (such as contrast or entropy) and recursively estimate the parameters of the model until maximum focus is reached. Non-parametric approaches often start from the
assumption that range aligned high range resolution (HRR) profiles have been generated via some
coarse range alignment technique. The autofocus step then estimates the phase error caused by the
translational motion compensation without assuming a model of the target’s motion, since the
range alignment process can induce higher order errors. Such techniques include Phase gradient
autofocus (PGA) (developed for focusing of spotlight SAR) and the minimum-variance technique. It
is clear that model-based approaches are not suited to cases where the phase error does not fit the chosen model. Conversely, non-parametric techniques might not converge in high-clutter or very low SNR conditions. Ideally, an ‘ISAR toolbox’ thus has to contain both approaches. A common
factor between most current autofocus techniques is that the range-Doppler image domain is
required to either suppress unwanted interference (e.g. PGA, min-variance) or to measure the
quality of the image focus (e.g. ICBA, entropy). Thus, both parametric and non-parametric
techniques iterate between the image domain and data domain making them computationally intensive
for larger data sets.
Parametric techniques: These are of two types, i.e., (a) image contrast-based autofocus and (b) minimum entropy-based autofocus.
Non-parametric techniques: These are also of two types, i.e., (a) prominent point processing (PPP) autofocus and (b) phase gradient autofocus.
The image contrast-based autofocus (ICBA) aims to form well-focused ISAR images by maximizing
the image contrast (IC), which is an indicator of the image quality.
Two characteristics make this algorithm different from the techniques described earlier:
1. The parametric nature of the ICBA: the radial motion of a target's point is described by a parametric function (typically a Taylor polynomial).
2. Radial motion compensation is accomplished in one step, therefore avoiding the range alignment step.
We discussed entropy minimization for range alignment and phase adjustment. As introduced earlier, autofocus is a data-driven algorithm that automatically adjusts focusing parameters and corrects phase errors based on the data. Entropy minimization is one of the data-driven algorithms for ISAR autofocus. Due to rotational phase errors and residual translational phase errors, ISAR images can be defocused. In many practical cases, dominant scatterers may not be well isolated; thus, it is difficult to precisely define the phase history for these scatterers. Therefore, autofocus techniques based on the assumption of a well-isolated dominant scatterer may not work effectively. In these cases, an autofocus algorithm based on minimization of the entropy cost function is helpful for ISAR autofocusing.
Phase Correction based on 2-D Entropy minimization
The entropy function can be used to indicate the quality of image focusing; a 2-D entropy cost function for ISAR images has been given in the literature. Because the phase function in an ISAR image controls the focus of the image, we can use the entropy minimization method to correct the phase function.
To search effectively for the optimal phase function, we should select a suitable model that represents the phase function (e.g., a polynomial function), the searching parameters (no more than two), and the limits of the searching range. In a simplified model, a two-piece piecewise-linear phase function was used, with two searching parameters. For the stepped-frequency signal waveform in a stepped-frequency continuous-wave (SFCW) radar, one parameter can be the index of the burst number and the other the slope angle of the piecewise phase function.
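As an illustration of entropy-based phase correction, the sketch below (ours; it searches a single quadratic coefficient rather than the two-parameter piecewise-linear model described above) forms a candidate image for each trial phase function and keeps the one with minimum entropy.

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy of the normalized image intensity (lower = better focus)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_autofocus(profiles: np.ndarray, coeffs: np.ndarray):
    """Search a quadratic phase-error coefficient that minimizes image entropy.

    profiles: complex (num_pulses, num_range_cells), assumed range-aligned.
    coeffs: candidate quadratic coefficients (rad/pulse^2) to try.
    Returns (best_coefficient, focused_image).
    """
    n = np.arange(profiles.shape[0])
    best = (None, None, np.inf)
    for a in coeffs:
        # Remove the trial quadratic phase, form the image, score it.
        corrected = profiles * np.exp(-1j * a * n**2)[:, None]
        img = np.fft.fft(corrected, axis=0)
        e = image_entropy(img)
        if e < best[2]:
            best = (a, img, e)
    return best[0], best[1]
```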
In the multiple PPP, the first prominent point is usually selected for removing translational motion and
adjusting the phase of the received signals so as to form a new image center. Then, a second prominent
point, which is for correcting the phase error induced by non-uniform rotations, must be selected. If
necessary, we can also select a third prominent point for estimating the rotation rate and the azimuth
scale factor of the resulting image to achieve complete focusing.
In ISAR, after coarse motion compensation, the image may still be smeared due to phase errors induced
by the target rotation and residual translation errors. To focus the image, we need to identify a
prominent point at the rotation center of the target and track its phase variation. Then, an appropriate
approach for searching an optimal phase function must be used. Finally, by applying the conjugate of
the estimated optimal phase function, the image can be focused on the rotation center.
A simple approach to find the optimal phase function is exhaustive searching. We use the Taylor polynomial to approximate the phase function at the range cell in which the rotation center falls. Thus, an exhaustive search process can be used to find a set of coefficients for constructing the optimal phase function. However, to simplify the search process, we search only two coefficients instead of four.
The PGA has been widely used in SAR autofocus. It was developed to make a robust estimation of the gradient of the phase error in defocused SAR image data. If a complex target has no stable prominent scatterer, the phase gradient (i.e., phase difference from pulse to pulse) can be
estimated by measuring the pulse-to-pulse phase difference at each range cell and averaging them.
Finally, the phase correction can be made iteratively.
The iterative PGA allows robust and nonparametric estimation and exploits the redundancy of the
phase-error information contained in a degraded SAR image. Because the performance of the PGA is
independent of the content of a SAR scene, there is no need for isolated point-like reflections in the scene, as the PPP algorithm requires.
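A simplified, iterative PGA can be sketched as follows (our illustration; production implementations also window around the shifted scatterer and add other refinements):

```python
import numpy as np

def pga(phase_history: np.ndarray, iterations: int = 5) -> np.ndarray:
    """Simplified Phase Gradient Autofocus sketch.

    phase_history: complex (num_pulses, num_range_cells) range-compressed data.
    Per iteration: form the image, center the brightest response in each
    range cell, return to the data domain, estimate the pulse-to-pulse
    phase difference averaged over range cells, integrate it, and remove it.
    """
    data = phase_history.copy()
    for _ in range(iterations):
        img = np.fft.fft(data, axis=0)
        # Circularly shift each range cell so its strongest point is at bin 0,
        # removing the scatterer's own Doppler and leaving the phase error.
        centered = np.empty_like(img)
        for c in range(img.shape[1]):
            centered[:, c] = np.roll(img[:, c], -int(np.argmax(np.abs(img[:, c]))))
        g = np.fft.ifft(centered, axis=0)
        # Phase gradient: pulse-to-pulse phase difference averaged over cells.
        prod = (np.conj(g[:-1]) * g[1:]).sum(axis=1)
        grad = np.angle(prod)
        phi = np.concatenate(([0.0], np.cumsum(grad)))   # integrate the gradient
        data = data * np.exp(-1j * phi)[:, None]
        if np.max(np.abs(grad)) < 1e-6:
            break
    return data
```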
For the operational inverse synthetic aperture radar (ISAR) situation, the target’s relative movement
with respect to the radar sensor provides the angular diversity required for range-Doppler ISAR
imagery, as given in Chapter 2. For a ground-based ISAR system, for example, collecting back
scattered energy from an aerial target moving with constant velocity for a sufficiently long
period of time can provide the angular extent necessary to form a successful ISAR image. On the other
hand, real targets such as planes, ships, helicopters, and tanks usually have complicated motion
components while maneuvering. These may include translational and rotational (yaw, roll, and pitch)
motion parameters such as velocity, acceleration, and jerk. Moreover, all of these parameters are
unknown to the radar engineer, which adds further complexity to the problem. Therefore, estimating
these motion parameters and inverting the undesired effects of motion on the ISAR image is called
motion compensation (MOCOMP). Since the motion parameters are unknown to the radar sensor, the
MOCOMP process can be regarded as a blind process and is considered one of the most challenging
tasks in ISAR imaging research. The MOCOMP procedure has to be employed in all SAR and ISAR
applications to obtain a clear, focused image of the scene or the target. In SAR applications, for
example, information gathered from the radar platform’s inertial measurement system, gyro, and/or
global positioning system (GPS) is generally used to correct the motion effects on the phase of the
received signal. The situation is very different in ISAR applications, however, where none of the
motion parameters, including velocity, acceleration, jerk, and the type of maneuver (straight motion,
yaw, roll, or pitch), is known to the radar. Therefore, these parameters must somehow be estimated
and then eliminated to obtain a successful ISAR image of the target. If an efficient compensation
routine is not applied, the resultant ISAR image is defocused and blurred in the slant-range and
cross-range dimensions. Various methods have been suggested to mitigate or eliminate these unwanted
motion effects in ISAR imaging. The single scatterer referencing algorithm, the multiple-scatterer
method, the centroid tracking algorithm, the entropy minimization method, the phase gradient
autofocusing technique, the cross-correlation method, and the joint time-frequency (JTF) methods are
popular ones among numerous ISAR MOCOMP techniques. The MOCOMP techniques considered here are:
1. Cross-Correlation Method
2. JTF autofocus algorithm
3. Minimum Entropy based autofocus algorithm
The cross-correlation method is one of the most basic and most widely applied range tracking
algorithms. The algorithm presented here relies on the stepped-frequency continuous wave (SFCW)
radar configuration. Let us assume that the radar system sends the stepped-frequency waveform of
M bursts, each having N pulses, toward the target. The target’s translational velocity, vt, is
assumed to be constant.
Therefore, the radar collects the two-dimensional (2D) backscattered electric field data, Es[m, n],
of size M ∙ N. Then, the phase of the mth burst and the nth pulse can be written in terms of vt as
φ[m, n] = −(4π ∙ fn / c) ∙ (Ro + vt ∙ (m ∙ N + n) ∙ TPRI) ……..(5.1)
where fn is the stepped-frequency value for the nth transmitted pulse, Ro is the initial radial
distance of the target from the radar, and TPRI is the time lag between adjacent pulses, or simply
the pulse repetition interval (PRI). Similarly, the phase of the (m + 1)th burst and nth pulse is equal to
φ[m + 1, n] = −(4π ∙ fn / c) ∙ (Ro + vt ∙ ((m + 1) ∙ N + n) ∙ TPRI) …………..(5.2)
Therefore, the phase difference between the adjacent bursts can be calculated as
Δφ[n] = φ[m + 1, n] − φ[m, n] = −(4π ∙ fn / c) ∙ ΔRburst−to−burst ……………………….(5.3)
where ΔRburst−to−burst = vt ∙ (TPRI ∙ N) is the so-called “range walk” between the adjacent bursts. This
range shift can be compensated by applying the following steps:
1. First, a one-dimensional (1D) fast Fourier transform (FFT) is applied along the pulses such that a
total of M range profile vectors, RPm, each of length N, is obtained.
2. One of the range profiles is taken as the reference. In practice, the first one, RP1, is usually
chosen, since its phase either leads or lags the phases of all the others.
3. The cross-correlations of the magnitudes of the other (M − 1) range profiles with that of the
reference range profile are calculated via the following cross-correlation factor:
Cm = F⁻¹{ F{|RPref|} ∙ F*{|RPm|} } ……………….(5.4)
4. The location of the peak value of each calculated cross-correlation indicates the range shift (or
time delay) required to align each RPm with the reference range profile RPref:
km = arg maxk { Cm[k] } ……………………(5.5)
5. The resultant index vector K is usually smoothed by fitting it to a lower-order polynomial so that
the gradual change within the index vector K is almost constant:
……………………….(5.6)
6. Therefore, the range walk between the mth range profile, RPm, and the reference range profile,
RPref, can then be approximated as
ΔRm = Δr ∙ km ……………………(5.7)
……………(5.8)
………….(5.9)
Then, the motion compensated range profile can be obtained by using this correcting phase as
………………………(5.10)
Once all M range profiles are corrected, a motion-compensated ISAR image can then be generated
using conventional ISAR imaging routines.
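Steps 1 to 6 above can be sketched with a toy example. The following is a Python illustration with made-up magnitude profiles (the report's own implementation is the MATLAB listing later in this chapter); the function names are mine:

```python
def circular_xcorr(ref, prof):
    # C[k] = sum_i ref[i] * prof[(i + k) % N]; the peak over k gives the
    # circular shift that best aligns prof with the reference profile.
    n = len(ref)
    return [sum(ref[i] * prof[(i + k) % n] for i in range(n))
            for k in range(n)]

def align_profile(ref, prof):
    c = circular_xcorr(ref, prof)
    k = max(range(len(c)), key=c.__getitem__)   # Eq. 5.5: argmax of C[k]
    aligned = [prof[(i + k) % len(prof)] for i in range(len(prof))]
    return aligned, k
```

In the full algorithm, the shift index k obtained for every profile is smoothed with a low-order fit before the phase correction of Equation 5.10 is built.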
Example for the Cross-Correlation Method: We will demonstrate the concept of range tracking
by applying the cross-correlation method to a numerical example. A hypothetical fighter, shown in
Figure 4.4, composed of perfect point scatterers of equal magnitudes, is chosen for this example. The
target, at an initial radial distance of Ro = 16 km, is moving toward the radar with a radial velocity of
vt = 70 m/s. The target has a radial acceleration of at = 0.1 m/s². We also assume that the target is
51 | P a g e B. Tech, ECE Dept., CUCEK(CUSAT)
Design, analysis and simulation of ISAR Motion Compensation Algorithms
rotating slowly with an angular velocity of φr = 0.03 rad/s. The radar sends 128 bursts, each having
128 modulated pulses. The frequency of the first pulse is fo = 10 GHz, and the total frequency
bandwidth is B = 128 MHz. The pulse repetition frequency (PRF) of the radar system is chosen as 20 kHz.
First, we obtained the conventional range-Doppler ISAR image of the fighter by employing traditional
ISAR imaging procedures without applying any compensation for the motion of the target. The
resultant raw range Doppler ISAR is depicted in Figure 5.3. As can be clearly seen from the figure,
the effect of target’s motion is severe in the resultant ISAR image such that the image is broadly blurred
in the range and Doppler domains, and the true locations of the target’s scattering centers cannot be
retrieved from the image. Next, the cross-correlation method is applied to track the range and
compensate for the motion of the target. First, the range profiles of the target are obtained by applying
1D inverse Fourier transform (IFT) operation to the backscattered electric field along the frequencies.
After taking the first range profile, RP1, as the reference, the cross-correlations between the
reference range profile and the others are calculated by using the formula in Equation 5.4. As explained in the
algorithm, the index for the maximum value of the correlations indicates the time shift required to
align the range profiles. After finding these indices and multiplying them by the calculated range
resolution of Δr = c/(2B) (1.17 m for this example), we get the range shifts of the range profiles
with respect to RP1. These range walks and their smoothed versions are plotted against the range
profile index in Figure 5.4 as solid and dashed lines, respectively. While smoothing the range
profile shifts with a lower-order polynomial (a line for this example), the robust LOWESS method is
utilized [29]. Furthermore, the differences between consecutive range walks are plotted in Figure 5.1a,
where these differences are almost constant. Then, the target’s radial translational speed can be found
by dividing these range walk differences by the time difference between each burst. This calculated
speed versus range profile index is plotted in Figure 5.1b. From this figure, we see that the speed is
almost constant, around 70 m/s. Taking the average of this speed vector gives an estimated value of
vt,est = 70.81 m/s for the target’s radial translational speed, which is very close to the actual speed
of vt = 70 m/s. At the last step of the algorithm, the phase contribution caused by the target’s motion
can be compensated for by multiplying the scattered field data by the phase term below:
…(5.11)
Fig. 5.5 Range profile shifts and their smoothed versions vs. range profile index
Fig. 5.6 Range difference and radial velocity with respect to range profile index
Once the phase of the collected scattered field is compensated, the ISAR image can then be obtained
by applying the regular ISAR imaging procedures. Figure 5.6 shows the resultant motion-compensated
ISAR image after applying the whole process explained above. The dominant motion effects of
translational velocity vt = 70 m/s are successfully eliminated, and the image of the fictitious fighter is
almost perfectly focused. We also note that the radial translational acceleration of at = 0.1 m/s² and
the angular speed of φr = 0.03 rad/s have little effect on the resultant motion-compensated image, as
observed from Figure 5.4. When these values are sufficiently larger, however, image
distortion/blurring is unavoidable if only the range tracking procedure is used as the compensation
tool.
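The numbers quoted in this example can be verified with a quick calculation. This is a sketch only; the variable names are mine:

```python
c = 3.0e8            # speed of light, m/s
B = 128e6            # total bandwidth, Hz
N = 128              # pulses per burst
PRF = 20e3           # pulse repetition frequency, Hz

dr = c / (2 * B)             # range resolution, ~1.17 m as quoted in the text
T_pri = 1 / PRF              # pulse repetition interval
T_burst = N * T_pri          # time between adjacent bursts
vt = 70.0                    # true radial speed, m/s
dR = vt * T_burst            # expected burst-to-burst range walk, Eq. 5.3
vt_est = dR / T_burst        # recovering the speed from the range walk
```

The burst-to-burst range walk of 0.448 m is well within one range cell, which is why the smoothed shift curve in Figure 5.4 is nearly a straight line.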
The phase of the received signal from a target at initial range Ro, with radial velocity vt and radial acceleration at, is
φ(t) = −(4πf/c) ∙ (Ro + vt ∙ t + (1/2) ∙ at ∙ t²) …………….(5.12)
Here, Ro is the initial radial distance of the target from the radar. The sign of vt can be either plus or
minus for an approaching or retreating target, respectively. Similarly, the sign of at can be either plus
or minus for an accelerating or decelerating target, respectively. The first phase, −4πfRo/c, is constant
for all time values and therefore can be suppressed for imaging purposes. With this convention, the
effect of motion can then be compensated if the scattered electric field is multiplied by the following
compensating phase term:
exp( j ∙ (4πf/c) ∙ (vt ∙ t + (1/2) ∙ at ∙ t²) ) ………………(5.13)
Therefore, the goal of the algorithm is to estimate the motion parameters of vt and at to be able to
successfully remove their effects from the phase of the received signal. If the ISAR image matrix is I
and has M columns and N rows, then the Shannon entropy, E, is defined as
E = −Σm Σn I′[m, n] ∙ ln( I′[m, n] ) …………………(5.14)
where
I′[m, n] = |I[m, n]|² / ( Σm Σn |I[m, n]|² ) ………………(5.15)
Here I′ is the normalized version of the ISAR image. The normalization is accomplished by dividing
the image pixels by the total energy in the image. Once the entropy is defined for the ISAR image
itself, the goal is to find the corresponding compensation vector (and thus the motion parameters)
such that the new ISAR image has the minimum entropy (or disorder). The process of searching for the
correct values of the motion parameters can be done iteratively, as will be demonstrated with a
numerical example.
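The search can be illustrated with a one-dimensional Python sketch: a single-scatterer return is blurred by an unknown velocity phase, and we pick the candidate velocity whose compensating phase term (as in Equation 5.13) yields the minimum-entropy spectrum. The parameters echo this chapter's examples; the full algorithm also searches over at, which is omitted here for brevity:

```python
import cmath, math

def dft(x):
    # naive DFT, adequate for this tiny illustration
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n)) for k in range(n)]

def image_entropy(spec):
    # Shannon entropy of the normalized intensity, in the spirit of Eq. 5.14/5.15
    p = [abs(v) ** 2 for v in spec]
    total = sum(p)
    return -sum(q / total * math.log(q / total) for q in p if q > 0)

c0, f0, v_true, prf, n = 3.0e8, 10e9, 70.0, 20e3, 16
t = [i / prf for i in range(n)]
# received signal: translational motion contributes phase -4*pi*f0*v_true*t/c0
sig = [cmath.exp(-4j * math.pi * f0 * v_true * ti / c0) for ti in t]

def compensated_entropy(v):
    comp = [s * cmath.exp(4j * math.pi * f0 * v * ti / c0)
            for s, ti in zip(sig, t)]
    return image_entropy(dft(comp))

# grid search: the correct velocity collapses the spectrum to one sharp line
v_est = min(range(0, 141, 10), key=compensated_entropy)
```

At v = 70 m/s the compensated signal is constant, its spectrum is a single line, and the entropy is (numerically) zero; every other candidate spreads the energy and raises the entropy.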
Example for the Minimum Entropy Method: In this example, we will demonstrate the use of
the minimum entropy method for compensating the motion effects in an ISAR image. First, we use a
target composed of discrete perfect point scatterers that have equal scattering amplitudes. The
resulting conventional ISAR image is blurred due to both the translational and the rotational motion
of the target. In Figure, the spectrogram
of received time pulses is plotted. This figure clearly demonstrates the progressive shift in the
frequency (or in the phase) of the consecutive received time pulses. This shift occurs due to the change
of target’s range distance from the radar during the integration time of the ISAR process. If a successful
MOCOMP practice is applied, there is expected to be no (or minimal) range shift between consecutive
time pulses. Then, the minimum entropy methodology is applied to the ISAR image data in Figure
5.10. The algorithm iteratively searches for the values of vt and at by minimizing the entropy of the
compensated ISAR image:
Es,comp[m, n] = Es[m, n] ∙ exp( j ∙ (4πf/c) ∙ (vt,est ∙ t + (1/2) ∙ at,est ∙ t²) ) ………………….(5.16)
Fig. 5.12 Entropy plot for translational radial velocity: (a) a fighter plane, (b) a single point
Fig. 5.13 ISAR image of a fighter plane after applying minimum entropy compensation
The effect of motion in the scattered field can then be mitigated by multiplying it with the
compensating phase term as given in Equation 5.16. Consequently, the motion-compensated ISAR
image is obtained as shown in Figure 5.12 by applying the regular FFT-based ISAR imaging
technique. The compensated ISAR image clearly demonstrates that the unwanted effects due to
target’s motion are eliminated after applying the minimum entropy methodology. The target’s
scattering centers are very well displayed with good
resolution. A further check is done by looking at the spectrogram of the motion-compensated received
time pulses, as illustrated in Figure 5.13. As is obvious from this spectrogram, the range delays
between the time pulses are eliminated such that the frequency (or phase) values of all the returned
pulses are aligned successfully.
Fig. 5.16 Entropy plot for translational radial velocity of a single point
Fig. 5.17 ISAR image of a single point after applying minimum entropy compensation
Received Signal from a Moving Target: In real-world scenarios, the target’s maneuver can be so
complex that the Doppler frequency shifts in the received signal vary with time. If the target has
complex motion such as yawing, pitching, rolling, or, more generally, maneuvering, regular Fourier-
based MOCOMP techniques may not be sufficient to model the behavior of the motion. Therefore, JTF
tools may provide insight into understanding and characterizing the Doppler frequency variations such
that translational and rotational motion parameters such as velocity, acceleration, and jerk can be
estimated with good fidelity. Let us assume that the target has a complex maneuver that can be
written as a linear combination of translational and rotational motion components. If R(t) is the
target’s translational range distance from the radar and Ø(t) is the rotational angle of the target
with respect to the RLOS axis, as illustrated in Figure 8.1, then R(t) and Ø(t) can be expanded into Taylor series.
We first assume that the target is modeled as a total of K point scatterers. The time-domain
backscattered signal at the radar receiver can then be represented as the sum over the scattering
centers on the target:
……………(5.17)
Here, Ak(xk, yk) is the backscattered field amplitude from the kth point scatterer. When only the
range profiles are of concern, the time-domain backscattered signal at a selected range cell, x, can
be written in a similar manner as follows:
………………(5.18)
Here, the x-axis corresponds to the range direction, and t is the coherent processing interval
variable, which can also be regarded as the pulse number. Substituting the Taylor expansions of R(t)
and Ø(t) into Equation 5.18 and displaying only the leading phase terms, one gets
…………………(5.19)
The first term in the phase is constant and can be ignored for the imaging process. To have a motion-
free range-Doppler image of the target, R(t) should be fixed at Ro, and Ø(t) should vary linearly
with time as Ø(t) = ωr∙t. If these ideal conditions are met, the Fourier transform operation will
successfully focus the cross-range points (i.e., the yk’s) onto their correct locations. Therefore,
the MOCOMP procedure should be applied to the phase terms of second order and higher, aiming to
suppress them in the phase of the received signal.
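The effect of those higher-order terms can be seen numerically: a pure linear phase Fourier-transforms to a single sharp Doppler line, while an added quadratic term (as arises from uncompensated acceleration or rotation) spreads the energy and lowers the peak. A small Python sketch with made-up numbers:

```python
import cmath, math

def dft_peak(x):
    # magnitude of the strongest DFT bin (naive DFT is fine at this size)
    n = len(x)
    return max(abs(sum(x[i] * cmath.exp(-2j * math.pi * k * i / n)
                       for i in range(n))) for k in range(n))

n = 32
# pure Doppler line: linear phase only, focuses into a single bin
linear = [cmath.exp(2j * math.pi * 4 * i / n) for i in range(n)]
# the same line with an uncompensated quadratic phase error added
quadratic = [linear[i] * cmath.exp(0.02j * i * i) for i in range(n)]
```

dft_peak(linear) is exactly n, a perfectly focused line; the quadratic error smears the same energy across several Doppler bins, which is precisely the defocusing MOCOMP must undo.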
An Algorithm for JTF-Based Rotational MOCOMP: One effective solution is to apply JTF
tools to extract the instantaneous Doppler frequency information of the time-varying range-Doppler
data such that time snapshots of the time-varying ISAR image can be constructed. A JTF-based
algorithm that takes time-snapshot ISAR images of a rotating target can be separated into the
following steps:
1. The pulsed radar (either linear frequency modulated [LFM] or SFCW) collects the scattered field
from the target over the coherent integration time. After the received signal is digitized, suppose
that we have a matrix of size M ∙ N. For SFCW radar operation, the matrix is obtained from M bursts,
each having N pulses.
2. In the second step, a 1D IFT operation is applied along the bursts to obtain the 1D range profiles
for the N pulses.
3. Then, multiple JTF transforms are applied to the pulses, one for every range cell. Each JTF
transformation at a single range cell yields a time-Doppler matrix of dimension M ∙ P. If the
target’s rotational velocity ωr is known, the Doppler axis can readily be replaced with a
cross-range axis by using the following relationship:
y = ( λc / (2 ∙ ωr) ) ∙ fD …………….(5.20)
where y is the cross-range variable, fD is the instantaneous Doppler frequency shift, and λc is the
wavelength of center frequency.
4. After the JTF transformation is applied to all of the range cells, a three-dimensional (3D)
time-range-Doppler (or time-range-cross-range) cube of size M ∙ N ∙ P is constructed. This cube
provides the range-Doppler (or range-cross-range) image at any selected time instant.
5. As the final step, a total of N range-Doppler (or range-cross-range) ISAR images of the target can
be obtained by taking different slices of the time-range-Doppler (or time-range-cross-range) cube. The
resultant 2D ISAR images correspond to the time snapshots of the target while rotating.
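The Doppler-to-cross-range conversion in step 3 (Equation 5.20) is a one-line mapping. A Python sketch using this chapter's example parameters, assuming ωr is known:

```python
c0 = 3.0e8                   # speed of light, m/s
fc = 3.256e9                 # center frequency from the example below, Hz
omega_r = 0.24               # rotation rate, rad/s
lam_c = c0 / fc              # wavelength of the center frequency

def doppler_to_crossrange(f_d, lam, omega):
    # Eq. 5.20: a scatterer rotating at omega produces a Doppler shift
    # proportional to its cross-range offset y, so y = lam * f_d / (2*omega).
    return lam * f_d / (2 * omega)
```

For instance, a scatterer at y = 5 m produces fD = 2∙ωr∙y/λc ≈ 26 Hz with these parameters, and the mapping recovers y exactly.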
Example for JTF-Based Rotational MOCOMP: An example of the above algorithm is
demonstrated for the airplane model in Figure 5.14. The model consists of ideal point scatterers that
imitate a fighter aircraft. The backscattered electric field is simulated for a scenario in which the
airplane is 16 km away from the radar and moving in a direction that makes a 30° angle with the
radar’s line-of-sight axis. The target has a translational speed of vt = 1 m/s while rotating with
an angular speed of ωr = 0.24 rad/s.
A total of 128 pulses in each of 512 bursts is selected for the SFCW radar simulation of the
backscattered electric field. The center frequency and the frequency bandwidth are selected as
fc = 3.256 GHz and B = 512 MHz, respectively. The corresponding pulse duration is then
……………(5.21)
The PRF is chosen as 20 kHz, so the PRI, the time between two consecutive pulses, is then
TPRI = 1/PRF = 1/(20 kHz) = 50 μs …………(5.22)
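These timing quantities follow directly from the chosen radar parameters; a quick check (variable names are mine):

```python
N = 128                 # pulses per burst
B = 512e6               # total bandwidth, Hz
PRF = 20e3              # pulse repetition frequency, Hz

df = B / N              # frequency step between successive pulses
T_pri = 1.0 / PRF       # Eq. 5.22: pulse repetition interval, 50 microseconds
T_burst = N * T_pri     # duration of one complete burst
```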
First, the traditional ISAR image is obtained by applying a 2D IFT to the backscattered field. The
resulting image is plotted in Figure 5.14, where the image suffers from blurring due to the fast
rotation rate of the target. Since the target’s angular and translational locations are different for
the first and last pulses, the maneuvering effect can also be seen in the ISAR image as smearing of
the aircraft’s image. This is analogous to any optical imaging system: when the object point moves
fast, it occupies several pixels in the image during the time the shutter stays open. Therefore, the
resulting picture of a fast-moving object is blurred.
As suggested in the above algorithm, time-dependent ISAR images of the target can be formed thanks
to the JTF-based ISAR imaging process, which takes time snapshots of the scene. Therefore, we applied
the above methodology to the backscattered field data from the target while it was moving. During the
implementation of the algorithm, a Gabor-wavelet transform that uses a Gaussian window function is
employed as the JTF tool. At the end of the algorithm, a corresponding 3D time-range-cross-range cube
is obtained for the simulated moving target. Time snapshots of the 3D time-range-cross-range cube are
plotted for nine selected time instants. Each subfigure corresponds to the ISAR image at a particular
time instant. As time progresses, the movement of the fighter’s radar image is clearly observed by
looking from the first ISAR image to the last one.
A Numerical Example
We will now demonstrate an example that simulates the above algorithm. The target is assumed to
consist of a set of perfect point scatterers that have equal scattering amplitudes as shown in Figure 5.9.
The target’s initial location is Ro = 1.3 km away from the radar, and it is moving with a radial
translational velocity of vt = 35 m/s and a radial translational acceleration of at = −1.9 m/s². The
target’s angular velocity is φr = 0.15 rad/s (8.5944°/s).
Fig 5.19 Conventional ISAR image of aeroplane target with translational and rotational motion
The radar’s starting frequency is fo = 3 GHz, and the total bandwidth is B = 384 MHz. The radar
transmitter sends out 128 modulated pulses in each of 512 bursts. The PRF is chosen to be 20 kHz.
Without applying any
compensation routine, the regular range-Doppler ISAR image of the target is formed using the
traditional imaging methodology. The corresponding range-Doppler ISAR image is shown in Figure
5.16. As clearly seen from the figure, the image is highly distorted, defocused, and blurred due to high
velocity values in both the translational and the rotational directions. To observe the frequency (or
phase) shifts among the received time pulses, the spectrogram for consecutive time pulses is shown in
Figure 5.16. Because of the translational acceleration, the nonlinearity of the frequency shifts is also
observed. After compensating for the errors associated with target’s motion, these shifts are expected
to be minimal. The translational motion parameters together with target’s initial distance Ro were
estimated using an MP-type search routine. After this exhaustive iterative search, the correct values
of Ro,est = 1.3 km, vt,est = 35 m/s, and at,est = −1.9 m/s² were successfully found.
Fig.5.21 ISAR image of the aeroplane target after translational motion compensation
The search is carried out over the translational radial velocity and the translational radial
acceleration. As seen from the figure, the MP search argument reaches its maximum when vt,est equals
35 m/s and at,est equals −1.9 m/s². After the translational motion parameters of the target were
estimated, the translational
MOCOMP was finalized by employing the formula. Then, the ISAR image corresponding to the
modified electric field is plotted in Figure 5.19. This figure clearly demonstrates the success of the
translational MOCOMP such that only the rotational motion-based defocusing is noted in the ISAR
image. The spectrogram of the time pulses in the modified received signal is also plotted in Figure to
investigate the frequency shifts between the consecutive time pulses. As seen from this spectrogram,
although the severe frequency shifts, caused mainly by the target’s translational velocity, were
mitigated, some fluctuation still exists in the phase of the modified received signal due to
rotational motion errors.
In the second part of the algorithm, the errors associated with the target’s rotational motion are
compensated. For this goal, a Gabor-wavelet transform that uses a Gaussian window function is
employed as the JTF tool to compensate the rotational motion effects in the ISAR image; the resultant
image is given in Figure, where all the phase errors due to the translational and rotational motion
of the target are eliminated. The resultant motion-free ISAR image is very well focused, and the
scattering centers on the target are well localized. A final check is performed by looking at the
spectrogram of the compensated received signal, as plotted in Figure, where the frequencies of the
time pulses are aligned.
Fig. 5.25 A hypothetical target composed of a single point
Fig. 5.26 Conventional ISAR image of a single point target with translational and rotational motion
Fig. 5.27 ISAR image of the single point target after translational motion compensation
Fig. 5.28 Spectrogram of time pulses (non-compensated) of a single point
ISAR image quality measurements play an important role in the development of ISAR image
digital processing methods. The main objective of ISAR image quality measurement is to examine
the performance of an ISAR system. ISAR image quality measurements are based on the analysis of a
point target. Some important ISAR image quality parameters are:
1. Peak signal-to-noise ratio
2. Integrated side lobe ratio
3. Image contrast
4. Image entropy
The term peak signal-to-noise ratio (PSNR) is an expression for the ratio between the maximum
possible value (power) of a signal and the power of distorting noise that affects the quality of its
representation. Because many signals have a very wide dynamic range (the ratio between the largest
and smallest possible values of a changeable quantity), the PSNR is usually expressed on the
logarithmic decibel scale.
Image enhancement, or improving the visual quality of a digital image, can be subjective. Whether one
method provides a better-quality image can vary from person to person. For this reason, it is
necessary to establish quantitative/empirical measures to compare the effects of image enhancement
algorithms on image quality.
Using the same set of test images, different image enhancement algorithms can be compared
systematically to identify whether a particular algorithm produces better results. The metric under
investigation is the peak signal-to-noise ratio. If we can show that an algorithm or set of
algorithms can enhance a degraded known image to more closely resemble the original, then we can more
accurately conclude that it is the better algorithm.
For the following implementation, let us assume we are dealing with a standard 2D array of data or
matrix. The dimensions of the correct image matrix and the dimensions of the degraded image matrix
must be identical.
The mathematical representation of the PSNR is as follows:
PSNR = 10 ∙ log10( MAXI² / MSE )
where MAXI is the maximum possible pixel value of the image and MSE is the mean squared error between
the original M × N image I and its degraded version K:
MSE = (1/(M ∙ N)) ∙ Σi Σj [ I(i, j) − K(i, j) ]²
The mean squared error (MSE) for our practical purposes allows us to compare the “true” pixel values
of our original image to our degraded image. The MSE represents the average of the squares of the
"errors" between our actual image and our noisy image. The error is the amount by which the values
of the original image differ from the degraded image.
The premise is that the higher the PSNR, the better the degraded image has been reconstructed to
match the original, and hence the better the reconstructive algorithm. This follows because we wish
to minimize the MSE between the images relative to the maximum signal value of the image.
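The MSE/PSNR comparison described above can be sketched in plain Python with hypothetical 8-bit pixel data:

```python
import math

def mse(orig, degraded):
    # average of squared pixel differences; images given as equal-sized
    # lists of rows, per the identical-dimensions requirement above
    rows, cols = len(orig), len(orig[0])
    return sum((orig[r][c] - degraded[r][c]) ** 2
               for r in range(rows) for c in range(cols)) / (rows * cols)

def psnr(orig, degraded, max_val=255.0):
    # higher PSNR (in dB) means the degraded image is closer to the original
    e = mse(orig, degraded)
    return float('inf') if e == 0 else 10 * math.log10(max_val ** 2 / e)
```

An identical image pair gives zero MSE and therefore infinite PSNR, which is why PSNR is only meaningful for comparing degraded reconstructions against one another.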
Here, h(τ) stands for the IRF in the azimuth or range direction, and [a, b] stands for the extent of
the main lobe at 3 dB below the maximum intensity peak.
There are several different ways of calculating the ISLR proposed in the literature, differing in the
choice of the areas over which the energy is integrated:
Sanchez defined the ISLR as the ratio of the energy inside a rectangle centred on the maximum of the
main lobe, with side length equal to the −3 dB width of the IRF, to the rest of the energy of the
IRF. In this definition, only one resolution cell is considered to contain the energy of the main lobe.
Franceschetti et al. defined the normalized integrated side lobe ratio, NISLR, as follows:
Guignard proposed that there are different regions in the IRF: the main beam area, which is 3 by 3
pixels centred on the maximum; the guard band, formed by the 26 pixels surrounding the main beam
area; and the side lobe area, formed by a square of 99 pixels per side, disregarding the inner
5 by 5 window.
Holm et al. established the ISLR as the ratio of the power within a square centred on the maximum,
twenty by twenty resolution cells in size, excluding an inner window three resolution cells per side,
to the power in that inner window.
The European Space Agency (ESA) established the ISLR as the ratio of the power within a square
centred on the maximum, ten resolution cells per side, excluding an inner window two resolution cells
per side, to the power in that inner window.
Martinez and Marchand [1] proposed to work with the normalized form of the ESA’s definition:
Image entropy can be calculated with the formula used by the Galileo Imaging Team:
E = −Σi Pi ∙ log2( Pi )
In the above expression, Pi is the probability that the difference between two adjacent pixels is
equal to i, and log2 is the base-2 logarithm.
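This adjacent-pixel-difference entropy can be computed as follows (a Python sketch; the function name is mine):

```python
import math
from collections import Counter

def galileo_entropy(img):
    # P_i = probability that a horizontally adjacent pixel difference equals i;
    # E = -sum_i P_i * log2(P_i)
    diffs = [row[c + 1] - row[c] for row in img for c in range(len(row) - 1)]
    counts = Counter(diffs)
    n = len(diffs)
    return -sum(k / n * math.log2(k / n) for k in counts.values())
```

A flat image has entropy 0, while an image whose adjacent differences are ±1 with equal probability has entropy of exactly 1 bit, so lower values indicate a smoother (for ISAR, better-focused) image.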
If the ISAR image matrix is I and has M columns and N rows, then the Shannon entropy, E, is defined as
E = −Σm Σn I′[m, n] ∙ ln( I′[m, n] )
where
I′[m, n] = |I[m, n]|² / ( Σm Σn |I[m, n]|² )
(i) Using a hypothetical airplane target (for obtaining PSNR, image contrast, and image entropy):
clear all
close all
clc
pulses = 128;
burst = 128;
c = 3.0e8;
f0 = 10e9;
bw = 128e6;
T1 = (pulses-1)/bw;
PRF = 20e3;
T2 = 1/PRF;
dr = c/(2*bw);
W = 0.03;
Vr = 70.0;
ar = 0.1;
R0 = 16e3;
theta0 = 0;
load Fighter3
h = plot(-Xc,Yc,'o', 'MarkerSize',8,'MarkerFaceColor',[0,0,1]);grid;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight','Bold');
axis([-35 35 -30 30])
xlabel('X [m]'); ylabel('Y [m]');
%Scattering centers in cylindrical coordinates
[theta,r]=cart2pol(Xc,Yc);
theta=theta+theta0*0.017453293; % degrees to radians (pi/180)
i = 1:pulses*burst;
T = T1/2+2*R0/c+(i-1)*T2;
Rvr = Vr*T+(0.5*ar)*(T.^2);
Tetw = W*T;
i = 1:pulses;
df = (i-1)*1/T1;
k = (4*pi*(f0+df))/c;
k_fac=ones(burst,1)*k;
%Calculate back-scattered E-field
Es(burst,pulses) = 0.0;
for scat = 1:1:length(Xc)
Vr_Dif = (-RangeDif/T_burst);
Vr_av = (RangeDifAv /T_burst);
%---Figure 8.4---------------------------------------------–
h = figure;
plot(i,RangeShifts,'LineWidth',1);hold
plot(i,SmRangeShifts,'-.k.','MarkerSize',2);hold
axis tight
legend('RP shifts','Smoothed RP shifts');
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('range profile index');
%figure8.5
h = figure;
subplot(211);
plot(RangeDif,'LineWidth',2);
axis([1 burst -.75 -.25 ])
set(gca,'FontName', 'Arial', 'FontSize',10,'FontWeight','Bold');
xlabel('Range profile index');
%------------------------------------------------–
win = hanning(pulses)*hanning(burst).';
ISAR = abs((fft2((Es_comp.*win))));
ISAR2 = ISAR(:,28:128);
ISAR2(:,102:128)=ISAR(:,1:27);
h = figure;
imagesc(X,Y,ISAR2);
colormap;colorbar;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight','Bold' ) ;
xlabel('Range [m]'); ylabel('Doppler index');
title('Motion compensated ISAR image')
maxValue = max(max(ISAR));
%subImage = imcrop(ISAR2, [25, 128, 20, 130]);
ISARcrop=ISAR(40:60,100:120);
meanvalue = mean(mean(abs(ISARcrop)));
snrv = abs(maxValue/meanvalue);
snrvdb=abs(10*log10(snrv)) % dB value requires log10, not the natural log
im=ISAR2/sum(sum(ISAR2));
IE = -sum(sum(im.*log2(im)))
image_contrast=max(ISAR2(:))-min(ISAR2(:));
IC= 10*log10(abs(image_contrast)) % dB value requires log10
clear all
close all
clc
pulses = 128;
burst = 128;
c = 3.0e8;
f0 = 10e9;
bw = 128e6;
T1 = (pulses-1)/bw;
PRF = 20e3;
T2 = 1/PRF;
dr = c/(2*bw);
W = 0.03;
Vr = 70.0;
ar = 0.1;
R0 = 16e3;
theta0 = 0;
load f
h = plot(-Xc,Yc,'o', 'MarkerSize',8,'MarkerFaceColor',[0,0,1]);grid;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight','Bold');
axis([-35 35 -30 30]);
xlabel('X [m]'); ylabel('Y [m]');
%Scattering centers in cylindrical coordinates
[theta,r]=cart2pol(Xc,Yc);
theta=theta+theta0*0.017453293; % degrees to radians (pi/180)
i = 1:pulses*burst;
T = T1/2+2*R0/c+(i-1)*T2;
Rvr = Vr*T+(0.5*ar)*(T.^2);
Tetw = W*T;
i = 1:pulses;
df = (i-1)*1/T1;
k = (4*pi*(f0+df))/c;
k_fac=ones(burst,1)*k;
%Calculate back-scattered E-field
Es(burst,pulses) = 0.0;
for scat = 1:1:length(Xc)
arg = (Tetw - theta(scat) );
rngterm = R0 + Rvr - r(scat)*sin(arg);
range = reshape(rngterm,pulses,burst);
range = range.';
phase = k_fac.* range;
Ess = exp(1i*phase);
Es = Es+Ess;
end
Es = Es.';
%------------------------------------------------–
%Form ISAR Image (no compensation)
X = -dr*((pulses)/2-1):dr:dr*pulses/2;Y=X/2;
ISAR = abs(fftshift(fft2((Es))));
h = figure;
imagesc(X,Y,ISAR);
colormap; colorbar;
set(gca,'FontName','Arial','FontSize',12,'FontWeight','Bold');
xlabel('Range [m]');
ylabel('Doppler index');
%–Cross-Correlation Algorithm Starts here-------------------
RP=(ifft(Es)).';
for l=1:burst
cr(l,:) = abs(ifft(fft(abs(RP(1,:))).*conj(fft(abs(RP(l,:)))))); % circular cross-correlation with the first range profile
[dummy,pk(l)] = max(cr(l,:)); % index of the correlation peak
end
Spk = smooth((0:pulses-1),pk,0.8,'rlowess');
RangeShifts = dr*pk;
SmRangeShifts = dr*Spk;
RangeDif = SmRangeShifts(2:pulses)-SmRangeShifts(1:pulses-1);
RangeDifAv = mean(RangeDif);
T_burst = T(pulses+1)-T(1);
Vr_Dif = (-RangeDif/T_burst);
Vr_av = (RangeDifAv /T_burst);
%-------------------------------------------------
h = figure;
plot(i,RangeShifts,'LineWidth',2); hold on;
plot(i,SmRangeShifts,'-.k.','MarkerSize',4); hold off;
legend('RP shifts','Smoothed RP shifts');
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('range profile index');
h = figure;
subplot(211);
plot(RangeDif,'LineWidth',2);
axis([1 burst -.75 -.25 ]);
set(gca,'FontName', 'Arial', 'FontSize',10,'FontWeight','Bold');
xlabel('Range profile index');
TL=sum(ISAR2(65,:))+sum(ISAR2(:,65))
ISLR=(TL-2*V)/V
ISLRDB=10*log10(ISLR)
clear all
close all
clc
pulses = 128;
burst = 128;
c = 3.0e8;
f0 = 10e9;
bw = 128e6;
T1 = (pulses-1)/bw;
PRF = 20e3;
T2 = 1/PRF;
dr = c/(2*bw);
W = 0.03;
Vr = 70.0;
ar = 0.1;
R0 = 16e3;
theta0 = 0;
load fighter3
h = plot(Xc,Yc,'o', 'MarkerSize',8,'MarkerFaceColor',[1 0 0]);
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
%-------------------------------------------------
%Form ISAR Image (no compensation)
X = -dr*((pulses)/2-1):dr:dr*pulses/2;Y=X/2;
ISAR = abs(fftshift(fft2((Es))));
h = figure;
imagesc(X,Y,ISAR);
colormap; colorbar;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('Range [m]'); ylabel('Doppler index');
%-------------------------------------------------
% JTF Representation of range cell
EsMp = reshape(Es,1,pulses*burst);
S = spectrogram(EsMp,128,64,128); % nfft must be at least the window length
[a,b] = size(S);
h = figure;
imagesc(X,Y,abs(S));
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
colormap;
xlabel('time pulses');
ylabel('frequency index');
title('Spectrogram');
m = 0;
for Vest = V
m = m+1;
n = 0;
for iv = A;
n = n+1;
VI(syc,1:2) = [Vest,iv];
S = exp((1i*4*pi*F/c).*(Vest*T+(0.5*iv)*(T.^2)));
Scheck = Es.*S;
ISAR = abs(fftshift(fft2((Scheck))));
SumU = sum(sum(ISAR));
I = (ISAR/SumU);
Emat = I.*log10(I);
EI(m,n) = -(sum(sum(Emat)));
syc = syc+1;
end
end
[dummy,mm] = min(min(EI.'));
[dummy,nn] = min(min(EI));
%---Figure 8_10 ---------------------------------------------
h =surfc(A,V,EI);
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
ylabel('Translational velocity [m/s]');
xlabel('Translational acceleration [m/s^2]');
zlabel ('Entropy value');
%------------------------------------------------
% ISAR after compensation
Sconj = exp((1i*4*pi*F/c).*(V(mm)*T+(0.5*A(nn))*(T.^2)));
S_Duz = Es.*Sconj;
h = figure;
ISAR2 = abs(fftshift(fft2(S_Duz),2));
imagesc(X,Y,ISAR2);
colormap;
colorbar;%grid;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('Range [m]');
ylabel('Doppler index');
%------------------------------------------------
% Check the compensation via JTF representation of range cells
EsMp = reshape(S_Duz,1,pulses*burst);
S = spectrogram(EsMp,128,64,128);
[a,b] = size(S);
h = figure;
imagesc(X,Y,abs(S));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('time pulses');
ylabel('frequency index');
title('Spectrogram');
maxValue = max(max(ISAR2));
%subImage = imcrop(ISAR2, [25, 128, 20, 130]);
ISARcrop=ISAR2(20:52,16:32);
meanvalue = mean(mean(abs(ISARcrop)));
snrv = abs(maxValue/meanvalue);
snrvdb=abs(10*log10(snrv))
im=ISAR2/sum(sum(ISAR2));
IE = -sum(sum(im.*log2(im)))
image_contrast=max(ISAR2(:))-min(ISAR2(:));
IC= 10*log10(abs(image_contrast))
pulses = 128;
burst = 128;
c = 3.0e8;
f0 = 10e9;
bw = 128e6;
T1 = (pulses-1)/bw;
PRF = 20e3;
T2 = 1/PRF;
dr = c/(2*bw);
W = 0.03;
Vr = 70.0;
ar = 0.1;
R0 = 16e3;
theta0 = 0;
load f
[theta,r]=cart2pol(Xc,Yc);
theta=theta+theta0*pi/180; % rotate by theta0 (degrees to radians)
i = 1:pulses*burst;
T = T1/2+2*R0/c+(i-1)*T2;
Rvr = Vr*T+(0.5*ar)*(T.^2);
Tetw = W*T;
i = 1:pulses;
df = (i-1)*1/T1;
k = (4*pi*(f0+df))/c;
k_fac=ones(burst,1)*k;
%Calculate back-scattered E-field
Es(burst,pulses) = 0.0;
for scat = 1:1:length(Xc)
arg = (Tetw - theta(scat));
rngterm = R0 + Rvr - r(scat)*sin(arg);
range = reshape(rngterm,pulses,burst);
range = range.';
phase = k_fac.*range;
Ess = exp(-1i*phase);
Es = Es+Ess;
end
Es = Es.';
%-------------------------------------------------
%Form ISAR Image (no compensation)
X = -dr*((pulses)/2-1):dr:dr*pulses/2;Y=X/2;
ISAR = abs(fftshift(fft2((Es))));
h = figure;
imagesc(X,Y,ISAR);
colormap; colorbar;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('Range [m]'); ylabel('Doppler index');
%------------------------------------------------
% JTF Representation of range cell
EsMp = reshape(Es,1,pulses*burst);
S = spectrogram(EsMp,128,64,128);
[a,b] = size(S);
h = figure;
imagesc(X,Y,abs(S));
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
colormap;
xlabel('time pulses');
ylabel('frequency index');
title('Spectrogram');
f = (f0+df);
T = reshape(T,pulses,burst);
F = f.'*ones(1,burst);
syc = 1;
V = 50:80;
A = -2.5:.1:1;
m = 0;
clear EI
for Vest = V
m = m+1;
n = 0;
for iv = A;
n = n+1;
VI(syc,1:2) = [Vest,iv];
S = exp((1i*4*pi*F/c).*(Vest*T+(0.5*iv)*(T.^2)));
Scheck = Es.*S;
ISAR = abs(fftshift(fft2((Scheck))));
SumU = sum(sum(ISAR));
I = (ISAR/SumU);
Emat = I.*log10(I);
EI(m,n) = -(sum(sum(Emat)));
syc = syc+1;
end
end
[dummy,mm] = min(min(EI.'));
[dummy,nn] = min(min(EI));
Sconj = exp((1i*4*pi*F/c).*(V(mm)*T+(0.5*A(nn))*(T.^2)));
S_Duz = Es.*Sconj;
%------------------------------------------------
% ISAR after compensation
h = figure;
ISAR2 = abs(fftshift(fft2(S_Duz),2));
imagesc(X,Y,10*log10(ISAR2));
colormap;
colorbar;%grid;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('Range [m]');
ylabel('Doppler index');
%------------------------------------------------
% Check the compensation via JTF representation of range cells
EsMp = reshape(S_Duz,1,pulses*burst);
S = spectrogram(EsMp,128,64,128);
[a,b] = size(S);
h = figure;
imagesc(X,Y,abs(S));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('time pulses');
ylabel('frequency index');
title('Spectrogram');
size(ISAR2)
[V,I]=max(max(ISAR2))   % peak value and its column index
[V,I]=max(max(ISAR2.')) % peak value and its row index
TL=abs(sum(ISAR2(63,:))+ sum(ISAR2(:,88)))
ISLR=abs(TL-(2*(V)))/V
ISLRDB=10*log10(abs(ISLR))
pulses = 128;
burst = 128;
c = 3.0e8;
f0 = 10e9;
bw = 128e6;
T1 = (pulses-1)/bw;
PRF = 20e3;
T2 = 1/PRF;
dr = c/(2*bw);
W = 0.03;
Vr = 70.0;
ar = 0.1;
R0 = 16e3;
theta0 = 0;
load fighter3
h = plot(-Xc,Yc,'o', 'MarkerSize',8,'MarkerFaceColor',[1 0 0]);
grid on;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
axis([-20 20 -20 20])
xlabel('X [m]'); ylabel('Y [m]');
[theta,r] = cart2pol(Xc,Yc);
theta = theta+theta0*pi/180; % rotate by theta0 (degrees to radians)
i = 1:pulses*burst;
T = T1/2+2*R0/c+(i-1)*T2;
Rvr = Vr*T+(0.5*ar)*(T.^2);
Tetw = W*T;
i = 1:pulses;
df = (i-1)*1/T1;
k = (4*pi*(f0+df))/c;
k_fac = ones(burst,1)*k;
Es(burst,pulses) = 0.0;
for scat = 1:1:length(Xc)
arg = (Tetw - theta(scat) );
rngterm = R0 + Rvr - r(scat)*sin(arg);
range = reshape(rngterm,pulses,burst);
range = range.';
phase = k_fac.* range;
Ess = exp(-1i*phase);
Es = Es+Ess;
end
Es = Es.';
X = -dr*((pulses)/2-1):dr:dr*pulses/2;Y=X/2;
ISAR = abs(fftshift(fft2(Es)));
h = figure;
imagesc(X,Y,ISAR);
colormap;
colorbar;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('Range [m]'); ylabel('Doppler index');
EsMp = reshape(Es,1,pulses*burst);
S = spectrogram(EsMp,128,64,128);
[a,b] = size(S);
h = figure;
imagesc(X,Y,abs(S));
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
colormap;
xlabel('time pulses');
ylabel('frequency index');
title('Spectrogram');
f = (f0+df);
T = reshape(T,pulses,burst);
F = f.'*ones(1,burst);
syc = 1;
RR = 1e3:1e2:2e3;
V = 50:80;
A = -2.5:.1:1;
m = 0;
clear EI
for Vest = V;
m = m+1;
n = 0;
for iv = A;
n = n+1;
p = 0;
for Rest = RR;
p = p+1;
VI(syc,1:2) = [Vest,iv];
S = exp((1i*4*pi*F/c).*(Rest+Vest*T+(0.5*iv)*(T.^2)));
Scheck = Es.*S;
SumU = sum(sum(Scheck));
EI(m,n,p) = abs(SumU);
end
end
end
[dummy,pp] = max(max(max((EI))));
[dummy,nn] = max(max((EI(:,:,pp))));
[dummy,mm] = max(EI(:,nn,pp));
figure;
h = surfc(A,V,EI(:,:,pp));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
ylabel('Translational velocity [m/s]');
xlabel('Translational acceleration [m/s^2]');
zlabel ('maximum argument');
Sconj = exp((1i*4*pi*F/c).*(V(mm)*T+(0.5*A(nn)*(T.^2))));
S_Duz = Es.*Sconj;
h = figure;
ISAR2 = abs(fftshift(fft2(S_Duz),2));
imagesc(X,Y,(ISAR2));
colormap;
colorbar;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('Range [m]'); ylabel('Doppler index');
Sconjres = reshape(S_Duz,1,pulses*burst);
S = spectrogram(Sconjres,128,64,128);
[a,b] = size(S);
h =figure;
imagesc(X,Y,abs(S));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('time pulses'); ylabel('frequency index');
title('Spectrogram');
Ese = S_Duz;
win = hamming(pulses)* hamming(burst).';
Esew = Ese.*win;
Es_IFFT = (ifft(Esew)).';
i = 1:pulses*burst;
T = T1/2+2*R0/c+(i-1)*T2;
N = 1;
tst = T2*pulses;
t = T(1:pulses:pulses*burst);
fp = 160;
Alpha_p = 0.04;
t_istenen = 100;
tp = ((t_istenen-1)*tst)/2;
parca1 = 1/sqrt(2*pi*(Alpha_p)^2);
parca2 = exp(-((t-tp).^2)/(2*Alpha_p));
for i=1:pulses
parca3 = exp((-1i*2*pi*fp*(t-tp))/N);
GaborWavelet(i,1:burst) = parca1*parca2.*parca3;
fp=fp+1/(pulses*tst);
end;
%------------------------------------------------
EMp = reshape(St_Img,1,128*128);
S = spectrogram(EMp,256,120);
h = figure;
imagesc(X,Y,abs(S.'));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('time pulses'); ylabel('frequency index');
title('Spectrogram');
maxValue = max(max(ISAR2))
%subImage = imcrop(ISAR2, [25, 128, 20, 130]);
ISARcrop=ISAR2(96:112,16:32);
meanvalue = mean(mean(abs(ISARcrop)));
snrv = abs(maxValue/meanvalue);
snrvdb=abs(10*log10(snrv))
im=ISAR2/sum(sum(ISAR2));
IE = -sum(sum(im.*log2(im)))
image_contrast=max(ISAR2(:))-min(ISAR2(:));
IC= 10*log10(abs(image_contrast))
clear all
close all
clc
pulses = 128;
burst = 128;
c = 3.0e8;
f0 = 10e9;
bw = 128e6;
T1 = (pulses-1)/bw;
PRF = 20e3;
T2 = 1/PRF;
dr = c/(2*bw);
W = 0.03;
Vr = 70.0;
ar = 0.1;
R0 = 16e3;
theta0 = 0;
load f
h = plot(-Xc,Yc,'o', 'MarkerSize',8,'MarkerFaceColor',[1 0 0]);
grid on;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
axis([-20 20 -20 20])
xlabel('X [m]'); ylabel('Y [m]');
[theta,r] = cart2pol(Xc,Yc);
theta = theta+theta0*pi/180; % rotate by theta0 (degrees to radians)
i = 1:pulses*burst;
T = T1/2+2*R0/c+(i-1)*T2;
Rvr = Vr*T+(0.5*ar)*(T.^2);
Tetw = W*T;
i = 1:pulses;
df = (i-1)*1/T1;
k = (4*pi*(f0+df))/c;
k_fac = ones(burst,1)*k;
Es(burst,pulses) = 0.0;
for scat = 1:1:length(Xc)
arg = (Tetw - theta(scat) );
rngterm = R0 + Rvr - r(scat)*sin(arg);
range = reshape(rngterm,pulses,burst);
range = range.';
phase = k_fac.* range;
Ess = exp(-1i*phase);
Es = Es+Ess;
end
Es = Es.';
X = -dr*((pulses)/2-1):dr:dr*pulses/2;Y=X/2;
ISAR = abs(fftshift(fft2(Es)));
h = figure;
imagesc(X,Y,ISAR);
colormap;
colorbar;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('Range [m]'); ylabel('Doppler index');
EsMp = reshape(Es,1,pulses*burst);
S = spectrogram(EsMp,128,64,128);
[a,b] = size(S);
h = figure;
imagesc(X,Y,abs(S));
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
colormap;
xlabel('time pulses');
ylabel('frequency index');
title('Spectrogram');
f = (f0+df);
T = reshape(T,pulses,burst);
F = f.'*ones(1,burst);
syc = 1;
RR = 1e3:1e2:2e3;
V = 50:80;
A = .1:.1:1;
m = 0;
clear EI
for Vest = V;
m = m+1;
n = 0;
for iv = A;
n = n+1;
p = 0;
for Rest = RR;
p = p+1;
VI(syc,1:2) = [Vest,iv];
S = exp((1i*4*pi*F/c).*(Rest+Vest*T+(0.5*iv)*(T.^2)));
Scheck = Es.*S;
SumU = sum(sum(Scheck));
EI(m,n,p) = abs(SumU);
end
end
end
[dummy,pp] = max(max(max((EI))));
[dummy,nn] = max(max((EI(:,:,pp))));
[dummy,mm] = max(EI(:,nn,pp));
figure;
h = surfc(A,V,EI(:,:,pp));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
ylabel('Translational velocity [m/s]');
xlabel('Translational acceleration [m/s^2]');
zlabel ('maximum argument');
Sconj = exp((1i*4*pi*F/c).*(V(mm)*T+(0.5*A(nn)*(T.^2))));
S_Duz = Es.*Sconj;
h = figure;
ISAR2 = abs(fftshift(fft2(S_Duz)));
imagesc(X,Y,10*log10(ISAR2));
colormap;
colorbar;
Sconjres = reshape(S_Duz,1,pulses*burst);
S = spectrogram(Sconjres,128,64,128);
[a,b] = size(S);
h =figure;
imagesc(X,Y,abs(S));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('time pulses'); ylabel('frequency index');
title('Spectrogram');
Ese = S_Duz;
win = hamming(pulses)* hamming(burst).';
Esew = Ese.*win;
Es_IFFT = (ifft(Esew)).';
i = 1:pulses*burst;
T = T1/2+2*R0/c+(i-1)*T2;
N = 1;
tst = T2*pulses;
t = T(1:pulses:pulses*burst);
fp = 160;
Alpha_p = 0.04;
t_istenen = 100;
tp = ((t_istenen-1)*tst)/2;
parca1 = 1/sqrt(2*pi*(Alpha_p)^2);
parca2 = exp(-((t-tp).^2)/(2*Alpha_p));
for i=1:pulses
parca3 = exp((-1i*2*pi*fp*(t-tp))/N);
GaborWavelet(i,1:burst) = parca1*parca2.*parca3;
fp=fp+1/(pulses*tst);
end;
h = figure;
imagesc(X,Y,abs(St_Img.'));
colorbar;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
colormap;
grid;
xlabel('Range [m]');
ylabel('Doppler index');
%------------------------------------------------
EMp = reshape(St_Img,1,128*128);
S = spectrogram(EMp,256,120);
h = figure;
imagesc(X,Y,abs(S.'));
colormap;
set(gca,'FontName', 'Arial', 'FontSize',12,'FontWeight', 'Bold');
xlabel('time pulses'); ylabel('frequency index');
title('Spectrogram');
size(ISAR2)
[V,I]=max(max(ISAR2))   % peak value and its column index
[V,I]=max(max(ISAR2.')) % peak value and its row index
TL=sum(ISAR2(39,:))+ sum(ISAR2(:,65))
ISLR=(TL-2*V)/V
ISLRDB=10*log10(abs(ISLR))
CHAPTER 6
To compare the different motion compensation algorithms, we use four image quality metrics: image entropy, image contrast, peak signal-to-noise ratio (PSNR) and integrated side-lobe ratio (ISLR).
Image entropy describes the 'busyness' of an image, i.e. the amount of information that a compression algorithm must encode. The lower the entropy, the more compressible the image, so lower entropy values are desirable. The comparison table shows that the cross correlation method has the lowest entropy, while the minimum entropy method has the highest value.
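The entropy computation can be sketched as follows (in Python/NumPy here for illustration; the listings compute the same quantity in MATLAB via `-sum(sum(im.*log2(im)))`):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (in bits) of a magnitude image.

    The image is normalised so its pixels sum to 1 and treated as a
    probability distribution; zero-valued pixels contribute nothing.
    """
    p = np.abs(img).astype(float)
    p = p / p.sum()
    p = p[p > 0]                      # skip log2(0) terms
    return float(-np.sum(p * np.log2(p)))

flat = np.ones((8, 8))                # uniform image: maximal entropy
point = np.zeros((8, 8))
point[3, 3] = 1.0                     # single bright pixel: zero entropy
# image_entropy(flat) -> 6.0 bits (log2 of 64 pixels)
# image_entropy(point) -> 0.0 bits
```

A well-focused ISAR image concentrates energy in few pixels and therefore has low entropy, which is why the minimum entropy search in the listings works as a focus criterion.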
The contrast of an image is the degree to which different objects in the image can be visually distinguished from one another; equivalently, it measures how well the image utilizes the available range of pixel intensities. Higher image contrast is desirable. From the comparison table, the JTF method has the highest image contrast and the cross correlation method the lowest.
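A minimal sketch of this max-minus-min contrast measure, matching the `max(ISAR2(:)) - min(ISAR2(:))` form used in the listings (Python/NumPy for illustration; other contrast definitions, such as normalised intensity variance, are also common in ISAR work):

```python
import numpy as np

def image_contrast_db(img):
    """Max-minus-min contrast of a magnitude image, in dB."""
    a = np.abs(img).astype(float)
    spread = a.max() - a.min()        # range of pixel intensities used
    return 10.0 * np.log10(spread)

img = np.zeros((4, 4))
img[1, 2] = 100.0                     # one strong scatterer on a dark background
# spread = 100, so the contrast is 10*log10(100) = 20 dB
```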
The higher the PSNR, the more faithfully the degraded image has been reconstructed to match the original, and hence the better the reconstruction algorithm. The JTF method has the highest PSNR and the cross correlation method the lowest.
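The listings estimate PSNR as the image peak divided by the mean magnitude of a target-free crop (the `ISARcrop` step), expressed in dB. A sketch of that crop-based estimate (Python/NumPy; the crop coordinates are illustrative):

```python
import numpy as np

def psnr_db(img, noise_region):
    """Peak-to-noise-floor ratio in dB.

    `noise_region` is a pair of slices selecting a target-free patch;
    its mean magnitude serves as the noise-floor estimate.
    """
    a = np.abs(img).astype(float)
    peak = a.max()
    floor = a[noise_region].mean()
    return 10.0 * np.log10(peak / floor)

img = np.ones((16, 16))               # background at unit level
img[8, 8] = 1000.0                    # bright point target
val = psnr_db(img, (slice(0, 4), slice(0, 4)))   # crop avoids the target
# val = 10*log10(1000/1) = 30 dB
```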
The lower the ISLR value, the more easily small targets can be detected in the presence of a larger target. The JTF method has the lowest ISLR and the cross correlation method the highest.
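The listings compute ISLR from the peak's row and column: the total level TL sums both (so the peak V is counted twice), and ISLR = (TL - 2V)/V in dB. A sketch of that form (Python/NumPy for illustration):

```python
import numpy as np

def islr_db(img):
    """Integrated side-lobe ratio in dB, following the listings' form.

    TL sums the magnitudes along the peak's row and column (counting
    the peak twice, hence the 2*V correction); the ratio compares the
    summed side lobes with the main-lobe peak.
    """
    a = np.abs(img).astype(float)
    r, c = np.unravel_index(a.argmax(), a.shape)
    v = a[r, c]
    tl = a[r, :].sum() + a[:, c].sum()
    return 10.0 * np.log10((tl - 2.0 * v) / v)

img = np.full((8, 8), 0.01)           # low, uniform side-lobe floor
img[3, 5] = 1.0                       # dominant main-lobe peak
# side-lobe sum along the peak's row and column is 14 * 0.01 = 0.14
```

Lower values mean the strong scatterer's side lobes mask less of its surroundings.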
Overall, the JTF method requires more computational power than the other methods because it uses a wavelet transform. The minimum entropy method is better suited to efficient implementation and requires less computation, but when a high resolution image is needed the JTF method is preferred.
COMPARISON TABLE
Motion compensation algorithm   Image entropy   Image contrast   PSNR (dB)   ISLR (dB)
Cross correlation method        9.0251          90.8536          26.2868     12.8887
Minimum entropy method          12.3625         96.007           72.9449     6.2164
Joint time frequency method     12.0282         99.4214          75.56       5.4594
REFERENCES
[1] "Inverse Synthetic Aperture Radar Imaging with MATLAB Algorithms" by Caner Özdemir
[2] "Spotlight Synthetic Aperture Radar: Signal Processing Algorithms" by Walter G. Carrara, Ron S. Goodman and Ronald M. Majewski
[6] "ISAR motion compensation via adaptive joint time-frequency technique" by Yuanxun Wang, Hao Ling and V. C. Chen
[7] "Translational motion compensation in ISAR image processing" by Haiqing Wu, D. Grenier, G. Y. Delisle and Da-Gang Fang
[8] "Synthetic Aperture Radar Polarimetry" by Jakob van Zyl and Yunjin Kim