Expert Systems with Applications 36 (2009) 3866–3879
Contents lists available at ScienceDirect
Expert Systems with Applications
journal homepage: www.elsevier.com/locate/eswa
An evolutionary computing approach for the target motion analysis (TMA)
problem for underwater tracks
Levent Ince a, Bulent Sezen b,*, Erhan Saridogan a, Huseyin Ince c
a Turkish Navy Research Center, Istanbul, Turkey
b Operations Management, Gebze Institute of Technology, Department of Business Administration, Cayirova Fabrikalar Yolu, No. 101, 41400 Gebze, Kocaeli, Turkey
c Gebze Institute of Technology, Gebze, Turkey
Article info
Keywords:
Target motion analysis
Evolutionary computing
Genetic algorithms
Abstract
This study is concerned with the problem of determining the kinematic parameters of a moving sound source from bearings only, known as target motion analysis (TMA). A new matched-field signal processing approach using a genetic algorithm (GA) and Monte Carlo simulation is proposed to establish tracks from bearing-only contacts. The basic idea is to take a number of measurements and run a simulation of the underwater tactical situation, then let the simulation change its parameters until its output matches the measurement data. When the simulation output (i.e., the replica data) matches the real measurements to within a predefined degree, the simulation is expected to resemble the real situation. In this sense, the TMA problem is treated as an optimization problem over a large parameter space, and a genetic algorithm is used to solve it. We developed an application called target motion analysis with genetic algorithms (TMAGA) in order to demonstrate the correctness of the algorithm, and Monte Carlo simulations demonstrate the results of the experiments conducted with TMAGA.
© 2008 Elsevier Ltd. All rights reserved.
1. Introduction
One of the main areas of naval warfare is submarine warfare, which requires detecting and disposing of a target without being discovered. The only way a submerged platform can do this is to listen to sound sources and analyze their tonals; passive sonar is its primary detection sensor. Tracking marine platforms using only passive sonar measurements is generally called target motion analysis (TMA). The primary result of a TMA solution is an estimate of the target kinematics (i.e., position, course and speed) of a particular marine platform at a given time, obtained from a sequence of measurements. Generally, a measurement consists of bearing and frequency values obtained by processing the hydrophone signals through a beam-forming process. Beam forming relies on the assumption that every marine vessel radiates acoustic noise originating from its engines or other equipment, and that this noise is received by ownship hydrophones. The signal received by the hydrophones commonly has a low signal-to-noise ratio due to background ocean noise, ownship noise and ambient reflections (Streit & Walsh, 2002). In addition, the bearings have a relative uncertainty factor associated with them (Cunningham & Thomas, 2005).
* Corresponding author. Tel.: +90 262 605 1416; fax: +90 262 653 8490.
E-mail addresses: lince@armerk.dzkk.tsk.mil.tr (L. Ince), bsezen@gyte.edu.tr (B.
Sezen), esaridogan@armerk.dzkk.tsk.mil.tr (E. Saridogan), h.ince@gyte.edu.tr (H.
Ince).
0957-4174/$ - see front matter © 2008 Elsevier Ltd. All rights reserved.
doi:10.1016/j.eswa.2008.02.046
Bearings-only target tracking has been an active area of research for several decades. In this paper, we introduce a different approach to determining target kinematics in TMA applications. Instead of classical methods such as the Kalman filter, the extended Kalman filter (Blackman & Popoli, 1999; Peters, 2001) and their variations (Grossman, 1991; Kronhamn, 1998; Moon & Stevens, 1996; Peach, 1995), an evolutionary computing technique, namely genetic algorithms (GA), is used to establish tracks from bearing-only acoustic contacts.
The concept is an extension of the use of inversion methods to determine environmental properties and of matched-field signal processing (Berman, 1997; Searle, 2005). In this sense, the full set of sensor-bearing information is taken as a set of measurements, and a simulation is matched to them. The basic idea is to take some measurements and run a simulation of the underwater tactical situation, then let the simulation change its parameters until its output matches the measurement data. When the output of the simulation (i.e., the replica data) matches the real measurements to within a predefined degree of error, the simulation is expected to resemble the real situation. At this point, the TMA problem is considered an optimization problem over a large parameter space, and genetic algorithms are used to solve it. It must also be noted that, since a genetic algorithm provides a solution (i.e., a global minimum) that is very close to, but not necessarily, the real solution, an additional Monte Carlo simulation is required to obtain a better solution.
In this study, a genetic algorithm based methodological procedure and its application, called target motion analysis with genetic algorithms (TMAGA), are developed. First, the TMA and evolutionary computing concepts are summarized in the next section; then the TMAGA application is explained. Finally, the solutions produced by GA and the results of the tests conducted with TMAGA are presented.
2. Target motion analysis

TMA is a method of tracking a moving platform using information obtained only by passive means. It is used either when the position and speed of the moving platform cannot be directly measured or when it is strictly undesirable to use active means such as radar or active sonar. TMA is therefore highly preferred by submarines and other underwater deployed sensors for which stealth is a must. In this context, TMA is considered a tracking method and is used to track the target's trajectory. In a general TMA problem, the bearing and incoming frequency of the acoustic signal radiated from the target are measured and, using these measurements, the position, course, speed and radiated signal frequency (after compensating for the Doppler effect) of the target are estimated, assuming that the target is in linear motion with constant velocity. In this paper, we leave the frequency of the radiated signals out of scope, as we deal only with the kinematic information. The geometry and mathematical model of the target and measuring-platform motions are described below.

A target moves with constant velocity along a straight line on a plane defined as the x-y plane (see Fig. 1). The target kinematic parameters under consideration are the initial point of the straight line (at t0) in Cartesian coordinates, X0 and Y0, and the Cartesian velocity components Vx and Vy, which are derived from the heading and speed of the target. At any time t, the position of the target (Xt, Yt) can be deduced from these parameters:

Vx = S cos(C)
Vy = S sin(C)
Xt = X0 + Vx t
Yt = Y0 + Vy t

where S denotes the speed and C the course of the target in degrees from 0 to 360.

A passive sensor, the onboard measuring platform (ownship), moving on the same x-y plane, collects n angular measurements at positions Xo(t) and Yo(t), which are T seconds apart. The angle θ(t) of the line connecting the target center to the ownship is called the bearing and is measured with respect to true north (0° or 360°), which is equivalent to the positive Y axis in Cartesian coordinates. At time t, this angle is equal to:

θ(t) = arctan[(Y(t) − Yo(t)) / (X(t) − Xo(t))]
     = arctan[(Y0 + Vy t − (Yo0 + Voy t)) / (X0 + Vx t − (Xo0 + Vox t))]    (1)

It is well known that the above-mentioned target kinematic parameters are observable only if the own-platform motion has a higher-order derivative than the target motion (Ancker, 1958; Porat & Friedlander, 1988). This means that if the target is moving in a straight line at constant speed, ownship must provide acceleration. The easiest and most practical way to provide this acceleration is to make a turn, so a change of ownship heading must be introduced to make the target parameters observable. This behavior of ownship is one of the basic assumptions our algorithm relies upon. After briefly describing the observability issue in TMA in the next paragraph, we will not elaborate on it further, and will keep assuming that the target moves on a straight path at constant velocity and that the own platform performs a turn in order to solve the associated TMA problem.

For a better understanding of the TMA problem, it is necessary to mention the observability issue in target tracking. Observability can simply be defined as the ability to find a unique set of target parameters (i.e., a unique solution) from n target bearings. If there is more than one parameter set, or if a parameter approaches zero or infinity, the parameters are said to be unobservable. As mentioned above, the target kinematic parameters are observable only if the own platform has a higher-order derivative than the target. Consider the case in which both the target and the ownship follow uniform straight-line paths. If we multiply the numerator and denominator of the proportion in Eq. (1),

(Y0 + Vy t − (Yo0 + Voy t)) / (X0 + Vx t − (Xo0 + Vox t)),

by a constant c, then although the parameters X0, Vx, Xo, Y0, Vy, Yo, Yo0, Voy, Xo0 and Vox change to cX0, cVx, cXo, cY0, cVy, cYo, cYo0, cVoy, cXo0 and cVox, respectively, the proportion would not change, and we would obtain the same bearings from two completely different sets of parameters. Therefore, for different values of c, there would be an infinite number of solutions.
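As a concrete illustration of Eq. (1) and of the scaling ambiguity just described, the bearing of a constant-velocity target seen from a constant-velocity ownship can be sketched as follows. This is only an illustration under the paper's constant-velocity assumptions; the function and parameter names are ours, and `atan2` is used instead of plain arctan to resolve the quadrant.

```python
import math

def bearing(t, tgt, own):
    """Bearing angle of Eq. (1) at time t.

    tgt and own are (X0, Y0, Vx, Vy) tuples for the target and the
    ownship; the angle is returned in radians.
    """
    x0, y0, vx, vy = tgt
    xo0, yo0, vox, voy = own
    dy = (y0 + vy * t) - (yo0 + voy * t)   # numerator of Eq. (1)
    dx = (x0 + vx * t) - (xo0 + vox * t)   # denominator of Eq. (1)
    return math.atan2(dy, dx)              # atan2 resolves the quadrant
```

Scaling every position and velocity of both platforms by the same constant c scales dy and dx together and leaves the bearing unchanged, which is exactly the infinite family of solutions discussed above.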
To show the problem visually, consider the cases in Fig. 2. In Fig. 2a–c the target is observable, since the own platform has a higher-order derivative than the target, while the case in Fig. 2d is not observable because the ownship and target motions are of the same order.

Fig. 1. Simple TMA diagram. [Figure: target trajectory on a straight line and ownship trajectory with two legs; X0, Y0: initial target position; Xt, Yt: target position at time t; Θt: bearing of the target at time t; Rt: range of the target at time t; C: course of the target; S: speed of the target.]

Fig. 2. Different observability situations. [Figure with four panels: (a) ownship on a straight line, steady target; (b) ownship making a leg, target on a straight line; (c) ownship accelerating on straight lines and making a leg, target on a straight line; (d) ownship on a straight line, target making a turn.]
Fig. 2a shows the situation of estimating the position of a steady target. In this case, the own platform is required to move along a straight line at uniform speed long enough to observe the target over a reasonably wide baseline. Fig. 2b shows a target moving along a straight line at constant speed, where the ownship is required to perform a maneuver. Finally, the situation with a maneuvering target is displayed in Fig. 2c; here it is still necessary for the ownship motion to have one degree of freedom above that of the target, which is why the ownship accelerates along the straight lines. An example (the most common situation) of unobservable target parameters is shown in Fig. 2d, where both the ownship and the target move along straight lines at constant speed.
We should also mention the degree of observability. A situation is considered highly observable if the difference between two consecutive bearings (the bearing rate) is high. This occurs if the target is not too far away, its speed is sufficiently high, or its velocity vector is perpendicular to the line of sight. Briefly, any condition that increases the bearing rate creates higher observability; conversely, low observability occurs when the bearing rate is small. Among the parameters that affect the bearing rate, range comes first, so it is generally believed that the TMA problem is harder to solve for distant targets. As we show in our tests, our algorithm performed quite well even for targets under low-observability conditions.
In TMA, target bearings measured by passive means can be processed immediately in a recursive manner, or stored and later processed with a batch processing technique. Algorithms of both recursive and batch types have been proposed (Aidala & Hammel, 1983; Fagerlund, 1980; Gong & Lindgren, 1981; Gong & Speyer, 1985; Nardone & Lindgren, 1984; Wilhoit, 1983). In batch processing techniques, the measurements must be stored with corresponding time stamps, and after a specific or otherwise appropriate time the collected data are processed at once. The processing is relatively expensive, as a considerable amount of data must be handled. In recursive methods, Kalman filters are used and it is unnecessary to store previous measurements; measured data are processed right after being received, and therefore the processing is relatively cheap. However, this approach has many shortcomings, described below.
In batch processing techniques, generally the difference between the bearing and incoming frequency of the signal, as calculated from the estimated state variable, and the actual measurements is normalized by the standard deviation of the measurement error, and the sum of squares of these normalized values is used as the evaluation value (cost function). For estimation, the least-squares method is used to minimize this evaluation value (Fujimoto, Okita, & Ozaki, 1997). The recursive techniques (i.e., Kalman-filter-based techniques) are formulated when the relationship between the measurements and the state variable is expressed by a linear function, in which case a solution equivalent to batch processing can be obtained. Under favorable noise, geometric and environmental conditions, or highly observable conditions, reliable unique estimates of the target state can be obtained by recursive techniques. However, most practical situations do not conform to these conditions. Underwater conditions, especially, introduce a large amount of acoustic noise into the measurements, and it is not always possible to deal with targets under highly observable conditions either. Besides these, the inherent uncertainty in selecting an appropriate mathematical model can cause instability in the estimation process. In addition, the relationship between the measurements (bearings) and the contact state (denoted by the variables x, y, Vx, Vy) is nonlinear in TMA; therefore, any linearization procedure applied can introduce additional estimation errors.
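The batch evaluation value described above (the sum of squared bearing residuals, each normalized by the measurement standard deviation) can be sketched as follows. This is a hypothetical illustration with names of our choosing; for brevity the ownship is placed at the origin, whereas a real implementation would subtract the ownship trajectory.

```python
import math

def evaluation_value(state, times, measured_bearings, sigma):
    """Batch cost: sum of squared, sigma-normalized bearing residuals.

    state = (X0, Y0, Vx, Vy) of a candidate target track; bearings in
    radians. Ownship is assumed fixed at the origin for brevity.
    """
    x0, y0, vx, vy = state
    cost = 0.0
    for t, b in zip(times, measured_bearings):
        predicted = math.atan2(y0 + vy * t, x0 + vx * t)
        residual = (b - predicted) / sigma   # normalize by measurement std
        cost += residual * residual
    return cost
```

A candidate track that reproduces the measurements exactly yields a cost of zero; any mismatch increases the cost, which is what the estimator minimizes.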
Before the era of modern estimation techniques, a couple of ad hoc methods, called the 1936 and 1959 methods, were employed for TMA (Jazwinski, 1973). In the era of modern estimation methods, however, a much more sophisticated method (the straightforward Extended Kalman Filter) has frequently been used for TMA, usually operating in north-oriented Cartesian coordinates and assuming a constant-velocity target model with a state vector. The Extended Kalman Filter (hereafter EKF) is an extended version of the classical Kalman filter, in which the nonlinear function is approximated as linear and the Kalman filter is applied to this approximation.
Although very sophisticated, even the EKF method suffers from several inadequacies (Aidala, 1979; Johnson & Cohen, 1983; Weiss & Moore, 1980), especially in the early stages of tracking: (a) the filter needs to be initialized with informative prior means and covariances if a singular covariance matrix is to be avoided, yet there is often no real collateral information on which to base this choice, and an incorrect guess influences the solution for a very long time (Moon & Stevens, 1996); (b) until a good estimate of the target position is obtained, the errors due to linearization are excessive and may cause an inaccurate estimate; and, perhaps worst of all, (c) the solution is prone to collapsing to zero range due to feedback effects of these linearization errors (Moon & Stevens, 1996). Other approaches to overcoming the initialization difficulties of the EKF have been proposed, but then very complicated and highly nonlinear sets of equations have to be dealt with. Also, to overcome the anomalous behavior of the filter in the initialization phase, some research has centered on the effect of the choice of coordinate system (Aidala & Hammel, 1983; Gong & Speyer, 1985; Springarn, 1987).
As a serious alternative to Kalman-filter-based TMA, matched processing algorithms have emerged in recent years (Searle, 2005). Simply put, the matching process in TMA can be described as follows: for a given set of target parameters, a replica signal is simulated. This replica signal is then compared with the received signal to produce a match score. The TMA solution process thus searches the parameter space for the parameters that generate the replica best matching the observed signal. Typically this search is done exhaustively over the parameter space, which is computationally extremely expensive. It is so expensive that, in practice, very sparse parameter domains are used, and certain parameters are often assumed to be known in order to restrict the search space to a manageable size (Fialkowki, 2001). For example, in most TMA applications presented to end users, the user is expected to enter upper and lower boundaries for the range and speed parameters. In some cases the speed parameter is even expected entirely from the user (e.g., it is assumed that sonar operators can detect the target speed by counting the propeller hits). Another approach to reducing the parameter space is to preprocess the data with a nearest-neighbors algorithm to determine which replicas are most similar to the data when projected onto signal basis functions (Ozard et al., 1993).

Our GA-based TMA approach is in fact a matched processing algorithm. GA performs a better and more robust search of the parameter space, without any need for user input to narrow the parameter space or for a more complicated algorithm.
3. Evolutionary computing, genetic algorithms
Solving a problem can be perceived as a search through a space of potential solutions. Since we are generally after "the best" solution, we can consider problem solving an optimization process. For a small search space, classical exhaustive search methods work; but as the search space grows, improved, artificial-intelligence-based techniques should be employed. Evolutionary algorithms are among such techniques; they are stochastic algorithms whose search methods utilize two natural phenomena: genetic inheritance and natural selection.

There are several variants of evolutionary algorithms, and there are also many hybrid systems which incorporate various features of these paradigms. However, the structure of any evolutionary method is very much the same.
An evolutionary algorithm maintains a population of individuals through a number of iterations, or generations. Each individual represents a potential solution to the problem at hand and is evaluated according to its "fitness". A new population is then formed for the next iteration by preferentially selecting the individuals with higher fitness values (the selection step). Some members of the new population undergo transformations by means of genetic operators to form new solutions. There are unary transformations (mutation), which create new individuals through a small change in a single individual, and there are higher-order transformations (crossover), which create new individuals by combining parts from several (two or more) individuals. After some number of generations the algorithm hopefully converges to a near-optimum (reasonable) solution. The variants of evolutionary algorithms differ from each other in the data structures used to represent individuals, the types of operators, and, to some degree, the order in which the operators are applied.
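The evaluate-select-crossover-mutate loop described above can be sketched as follows. This is a generic illustration, not the TMAGA implementation: tournament selection and elitism are our assumed details, and all names are ours.

```python
import random

def evolve(fitness, n_bits, pop_size=30, generations=50,
           p_cross=0.9, p_mut=0.02, rng=None):
    """Generic GA loop over bit-string individuals.

    Uses 2-way tournament selection, single-point crossover, bit-flip
    mutation, and keeps one elite individual per generation.
    """
    rng = rng or random.Random()
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)          # 2-way tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = [best[:]]                        # elitism: carry the best over
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < p_cross:         # single-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):        # bit-flip mutation
                    if rng.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)
    return best
```

Run on the classic OneMax toy problem (fitness = number of 1 bits), the loop steadily accumulates 1s, illustrating how selection pressure plus the two operators drives convergence.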
Genetic algorithms (GA) are one of the most widely used variants of evolutionary algorithms. They have been used for numerous types of engineering optimization problems in a very diverse range of disciplines; for example, they have been effectively applied to optimizing the operation of a river/reservoir system for economic return (civil engineering) (King, Fahmy, & Wentzell, 2000), to database design (computer science) (Cedeno & Vemuri, 2000), and to the optimal scheduling of thermal power generation (electrical engineering) (Dasgupta, 2000).

GA is mostly considered a kind of global optimization technique, based on letting the individuals in a population evolve toward a better (hopefully best), more developed form. To ensure a positive evolution (the evolution of individuals into better species), several techniques are used. We will not elaborate on these techniques here, as they are well documented in various textbooks and tutorials (Eiben & Smith, 2003; Gwiazda, 2006; Mitchell, 1998).
Fig. 3. The flow of a classical genetic algorithm application. [Figure: a population of M bit-string chromosomes is initialized and the fitness of each is evaluated; if the termination condition is not met, selection, single-point crossover (e.g., parents 11011|00100110110 and 11011|11000011110 exchange their tails at the cut point) and bit-flip mutation are applied to form the next generation; otherwise the solution is output.]
In order to solve a problem by GA, the problem is expressed in such a way that potential solutions can be encoded as character strings over an alphabet. Sometimes this is not a trivial task, and quite often the process involves additional heuristics. In this work, the alphabet consists only of 1s and 0s. Such a character string is generally called an individual or chromosome.

After deciding on the structure of an individual, a population of random individuals (potential solutions) is formed. Each individual is then decoded (the potential solution is fetched), tested against some metrics, and scored. The individuals with relatively high scores are more likely to survive and become candidates for producing the offspring of the next generation. The next generation is formed by mating some of these individuals, mutating some of them, and directly saving the remaining ones. The new population is then decoded and scored, and this process is repeated a specific number of times or until some termination condition is reached. The number of times the process is repeated is called the number of generations. The mapping of TMA problem solutions onto chromosomes and the GA operators are described in subsequent sections. Fig. 3 depicts the general flow of a genetic algorithm application.
TMA is a challenging task in situations with high error. The biggest probable cause of error in acoustic contacts lies in the environment: environmental and geographical conditions, together with oceanographic features, cause significant problems, especially when operating in a shallow-water environment. Traditional techniques such as Kalman filters can easily diverge in such an environment. In these situations, we believe a stochastic approach is better suited to the nature of the problem. It is an objective of the present work to provide a method and an application for solving the TMA problem under very noisy and low-observability conditions. In the next section, we describe the details of our work.
4. TMAGA (target motion analysis with genetic algorithms)
This section describes the structure of our test bed, TMAGA, and shows the application of genetic algorithms to the TMA problem. TMAGA is a software module consisting of procedures and data structures. It can track any bearing-only target as long as a set of consistent bearings is provided. Two important variables in the application are the number of generations and the number of Monte Carlo trials. These parameters can be changed simultaneously or independently to increase the performance (speed) and/or the precision of the GA.

In the subsequent sections, we first depict the structure of a chromosome (i.e., the potential solution). This structure is how we represent a target's parameters in the solution set. We then explain how the application manipulates the chromosomes toward the potentially real solution.
4.1. Target motion representation – chromosome structure
As stated in the previous sections, a target is always assumed to follow straight-line motion during a specific period in order to obtain a TMA solution, and the only kind of data obtained from the passive sensors is bearing. Under these assumptions, a chromosome representing a target's motion consists of four parts: the initial horizontal and vertical positions (X0, Y0), where the target is located when TMA starts; the course of the target (C); and the speed of the target (S), which is assumed to be steady during TMA. These values are the decision variables. X0 and Y0 give the initial position of the track at time t0, and the later positions Xi and Yi at times t1 to tn−1 are calculated using the C and S values of the target. Fig. 4 shows the general structure of an individual chromosome.

( X0 | Y0 | C | S )

Fig. 4. A chromosome modeling a target's motion.

A chromosome models a target's motion only for a specific period of time, and this period must be long enough to collect the data needed to solve TMA. Here, ti indicates the time at which the system accepts a contact (i.e., a bearing) from the passive sensor; it starts at t0 and goes to tn−1. The system saves the n most recent bearing data in order to use them in the GA evaluation phase. When time tn−1 is reached, the data collection phase ends and the solution phase starts.
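Decoding a chromosome begins with splitting the concatenated (X0 | Y0 | C | S) bit string of Fig. 4 back into its four fields. A minimal sketch follows; the function name and the hypothetical field lengths are ours, as the paper does not fix them here.

```python
def split_chromosome(bits, lengths):
    """Split a concatenated bit string into its fields.

    lengths gives the bit count of each field, in the order the fields
    appear in the chromosome (X0, Y0, C, S in TMAGA's case).
    """
    fields, pos = [], 0
    for n in lengths:
        fields.append(bits[pos:pos + n])
        pos += n
    return fields
```

Each field can then be decoded independently onto its own parameter range, so the number of bits per field can differ, as the next section's formula requires.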
In classic GA a chromosome is generally a sequence of 1s and 0s, and we follow this tradition in TMAGA. An array data structure is used, with each bit kept in one array element. Each value (decision variable) in the chromosome (X, Y, C and S) is represented by a specific number of bits. The length of the bit sequence representing a variable is defined by the size of the parameter space from which the variable takes its values; this size depends on the maximum and minimum values of the decision variable and on the desired precision.

The number of bits needed to represent a decision variable is calculated as follows:

2^(m−1) < (Max − Min) · 10^p ≤ 2^m − 1    (2)

where Max is the maximum value the decision variable can take, Min the minimum value it can take, p the degree of precision, and m the number of bits representing the decision variable.
As can be seen from formula (2), the number of bits, and hence the length of the chromosome, depends strongly on the maximum and minimum values of its variables. The performance of GA, in turn, is strongly affected by the size of the chromosome; therefore, for an efficient GA application, the size of an individual should be kept as small as possible. In particular, the maximum and minimum values of the (X, Y) positions in a maritime area tend to be very large (from 0 to more than 100,000 m), and representing such large variables requires long chromosomes, which eventually decreases the performance of GA. For that reason, other methods are required to shrink the chromosome size as far as possible, by shifting the maximum and minimum values of the decision variables relative to each other; in this way the parameter space is narrowed. The shrinking method we developed, and how it was applied to each decision variable, is explained in the following paragraphs.

It is also important to mention the effect of the precision of the decision variables. As can be deduced from formula (2), the higher the precision, the longer the chromosome; thus a low precision value is practically preferable for better GA performance. Although it may seem that the only thing we lose by keeping the variable precision low is the precision of the final solution, we actually lose more than that. Our experiments show that a higher precision in the decision variables provides the facilities of a gradient-type optimization method in addition to the global optimization: while global optimization allows us to visit different areas of the parameter space without being trapped in local minima/maxima, the gradient-like optimization allows a more precise search around a local minimum/maximum.
4.2. Creating the initial population (initial solution set)
When TMAGA starts for the first time, a procedure is used to initialize the population. In this initial population, the number of chromosomes (the population size) and the chromosome lengths have already been defined as default system parameters. The length of these initial chromosomes is naturally very large because of the immense parameter space; it is defined by an initial set of maximum and minimum values and the precision of the decision variables (X, Y, C, S). After the length of the chromosome is defined, random 1s and 0s are assigned to each bit of the chromosome. Since each chromosome is represented by an array of bits, the data structure representing the population is an array of arrays.
4.3. Getting contact data
Since TMAGA is a test bed and using real world data would be
both costly and not always practical, there was a need for a tool
that could emulate the behavior of real targets, sensor systems
and the meteorological environment. For this reason, a high-fidelity stimulator, METEOR, developed in house, has been selected to
produce contact data.
At regular time intervals, TMAGA fetches new target bearings
from METEOR as if they are being reported from a passive sensor,
and kept in a data structure with the time stamps associated with
them. The emitted bearings by METEOR also include Gaussian errors to simulate a real sensor sending erroneous bearing data.
The standard deviations of the sensor errors are introduced to METEOR as predefined variables.
From time t0 to tN−1, the bearing values and the associated time stamps of the reported bearings are kept in an internal data structure. As mentioned above, the number of consecutive bearings (N) tells us how much of the most recent bearing data belonging to a specific target is stored for further calculations. In the trials, N was generally taken as 24. As soon as the required number of bearings is received, TMAGA starts target motion analysis.

Obviously, in real life the value of N (i.e., the number of bearings) is very important. Collecting more bearings requires more time, and it may not always be possible for an underwater vehicle to wait that long; yet if the number of bearings is kept low, the precision of the solution drops. The time difference between two consecutive bearings is also an issue: as this difference increases, the bearing rate (the absolute value of the difference between two consecutive bearings) increases, which makes the case more observable.
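METEOR itself is in-house software, so purely as an illustration, a stand-in generator of time-stamped bearings with additive Gaussian error might look like this. The bearing-from-north convention, the default values and all names are our assumptions.

```python
import math
import random

def simulate_bearings(tgt, own, n=24, dt=10.0, sigma_deg=2.0, rng=None):
    """Emit n (time, bearing) pairs, dt seconds apart, of a
    constant-velocity target seen from a constant-velocity ownship,
    with additive Gaussian error of standard deviation sigma_deg.

    tgt and own are (X0, Y0, Vx, Vy); bearings are degrees from north.
    """
    rng = rng or random.Random()
    out = []
    for k in range(n):
        t = k * dt
        dx = (tgt[0] + tgt[2] * t) - (own[0] + own[2] * t)
        dy = (tgt[1] + tgt[3] * t) - (own[1] + own[3] * t)
        true_brg = math.degrees(math.atan2(dx, dy)) % 360.0  # from +Y axis
        out.append((t, (true_brg + rng.gauss(0.0, sigma_deg)) % 360.0))
    return out
```

With the paper's typical setting of N = 24, one call produces the full measurement window that the GA evaluation phase would consume.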
4.4. Running TMAGA to establish a track from bearing data
TMAGA is used mainly to establish a track from bearing data. It performs this duty using the bearing data produced by METEOR and stored by the application. In TMAGA, a module called the GA engine does the work of the genetic algorithms and can be considered the kernel of TMAGA. The main goal of the GA engine is to produce the track vector that best fits the bearing data.
The first step is to narrow the parameter space as much as possible in order to increase the speed and efficiency of the GA engine. The GA engine is designed in such a way that, in each complete run, the parameter space is narrowed by a specific percentage. Initially, running the GA engine several consecutive times with a reduced number of generations over the full parameter space ensures a considerable reduction of the parameter space. Briefly, the idea is to use quick, less precise runs to bracket parameter areas of high likelihood. The number of generations, which is normally set to at least 1000, is reduced to 200, and the GA engine runs 20 times. The implementation of this approach is described below with an example applied to the course decision variable.
Initially, the maximum and minimum values for a decision variable are defined as system parameters. After the first run of the GA engine, an estimate of the decision variable is found; naturally, the estimate lies between the maximum and minimum values. The distances between the estimate and the two limits are calculated and reduced by 20% by shifting the minimum value upward and the maximum value downward, and the new limits are set to these generated values. This process is repeated 20 times, after which the parameter space is expected to be narrow enough. Of course, the algorithm is not quite that trivial. For example, if the result coming from the GA is too close to the maximum or minimum value, both limits are shifted together towards the appropriate extreme by a specific amount. Other precautions are likewise implemented to keep the real value between the maximum and minimum values.
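One narrowing step can be sketched in Python as follows. The 20% shrink is taken from the paper; the `edge_guard` fraction and the exact behavior when the estimate sits too close to a limit (shifting the whole window without shrinking it) are our own assumptions, since the paper only says the limits are shifted "by a specific amount". Applied to the first row of Table 1 (limits 0 and 360, GA result 81), the shrink step yields 16.2 and 304.2, which matches the second row after rounding:

```python
def narrow_interval(lo, hi, estimate, shrink=0.20, edge_guard=0.05):
    """One narrowing step: shrink the distances between the GA estimate
    and the interval limits by `shrink` (20% in the paper)."""
    span = hi - lo
    if estimate - lo < edge_guard * span:   # too close to the minimum:
        shift = shrink * span               # shift the window down instead
        return lo - shift, hi - shift
    if hi - estimate < edge_guard * span:   # too close to the maximum:
        shift = shrink * span               # shift the window up instead
        return lo + shift, hi + shift
    new_lo = lo + shrink * (estimate - lo)  # shift the minimum upward
    new_hi = hi - shrink * (hi - estimate)  # shift the maximum downward
    return new_lo, new_hi
```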
Table 1 gives an example of how the GA narrows the parameter space for the course decision variable of a target whose actual course is 85°. The values in the table were produced by the application. As can be seen, the parameter space is narrowed considerably, from 360° down to 21°, before the GA engine starts.
These 20 runs narrow all parameter spaces, but no actual tracking is done yet; they are therefore called dummy runs. Although this narrowing operation yields a considerable performance advantage, it could be argued that it brings a major drawback: the parameter space may be narrowed into an infeasible area. However, this is a very unlikely case, as it would require extremely bad GA solutions in all 20 runs. Throughout our extensive trials, we have never observed such a case.
After the dummy runs, the GA engine is run with a high number of generations (i.e. 1000) to perform the real tracking process. The following headings explain the steps of the GA engine. Recall that these procedures are repeated as many times as the number of generations value, with the aim of improving the individuals in the population until a solution emerges.
4.4.1. Evaluating chromosomes
In the evaluation process, the bearing measurements from t0 to tN−1 are compared with the time-stamped bearings hypothetically derived from the (X, Y, C, S) family in a chromosome. For every ti, Xi and Yi are found by recursively moving the most recent position with speed S and course C, and the corresponding bearings are derived by the well-known formula

θ(t) = arctan[(Y(t) − Y_o(t)) / (X(t) − X_o(t))]    (3)
Then, for each pair, the difference between the squares of the derived bearing and the measured bearing is calculated; this difference is called the deviation. The sum of the deviations over all bearing pairs is called the total deviation. Fig. 5 shows how the total deviation is calculated.
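The evaluation step above can be sketched in Python as follows (a minimal sketch; all names are ours). Two details are our assumptions: atan2 is used so that Eq. (3) resolves the correct quadrant, and the absolute value of each squared-bearing difference is taken so that the total deviation behaves as a non-negative error measure:

```python
import math

def predicted_bearings(x0, y0, course_deg, speed, own_positions, times):
    """Bearings hypothetically produced by a chromosome (X0, Y0, C, S).

    The target is moved recursively from (x0, y0) with constant course
    (clockwise from North) and speed; each bearing follows Eq. (3).
    `own_positions` is a list of ownship (Xo, Yo) fixes at `times`.
    """
    bearings = []
    x, y = x0, y0
    t_prev = times[0]
    for (xo, yo), t in zip(own_positions, times):
        dt = t - t_prev
        x += speed * dt * math.sin(math.radians(course_deg))
        y += speed * dt * math.cos(math.radians(course_deg))
        t_prev = t
        bearings.append(math.degrees(math.atan2(y - yo, x - xo)))
    return bearings

def total_deviation(measured, predicted):
    """Sum of per-pair deviations (Fig. 5); the absolute value is our
    assumption so the sum acts as an error measure."""
    return sum(abs(m ** 2 - g ** 2) for m, g in zip(measured, predicted))
```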
Table 1
Demonstration of the parameter space narrowing algorithm applied to the course decision variable

#    Max   GA result   Min   Parameter space
1    360      81         0        360
2    304     163        16        288
3    276      76        45        231
4    236      85        51        185
5    206     206        58        148
6    236     120        88        148
7    213     190        94        119
8    208     124       113         95
9    191     118       115         76
10   176     116       103         73
11   164     106       106         58
12   152      94        94         58
13   140      89        82         58
14   130      79        79         51
15   120      81        69         51
16   112      80        71         41
17   106      92        74         32
18   103      78        78         25
19    98      77        73         25
20    94      86        73         21

In this particular case, the actual course was 85°.
Fig. 5. The bearings reported by METEOR and the bearings found by GA that supposedly belong to the real target. (θ0 … θN are the bearings reported by the sensor at times Ti, normally distributed around the track vector; the figure also shows the bearings proposed by GA and the angular differences between them. Δi = θ_M² − θ_G² is the difference between the square of a measured bearing and the square of a calculated bearing at time ti, and Total Deviation = ΣΔi.)
A smaller total deviation means less error and a better TMA result, so the objective of the GA is to minimize this total deviation. As a GA is generally a maximization process rather than a minimization one, we instead maximize (ΣΔi)⁻¹, which in GA terms is called the fitness value. In this fashion, a fitness value is found for every chromosome in the population. At this point, we can say that the chromosomes with relatively high fitness values are relatively close to the ideal solution (see Fig. 6).
Next, the implementation of the classical GA procedures is explained with examples. Although an actual chromosome is considerably longer, it is shown much shorter for the sake of clarity.
4.4.2. Selection
In this procedure, the chromosomes that will survive into the next generation are selected. Each chromosome is assigned a probability-to-survive value based on its fitness: the better the fitness, the higher the probability. Once these probabilities are found, the next generation (a new population) is formed from the chromosomes that pass the selection process, where chromosomes are drawn according to their probability-to-survive values. Every chromosome, even the worst one, has a chance to pass the selection, but chromosomes with higher probabilities have a better chance. After the selection is completed, the old population is replaced with the new one.
4.4.3. Crossover
As stated in Section 1, crossover is a classical GA procedure. The whole population is scanned, and some chromosomes are selected as parents to reproduce offspring with a preset probability. This probability is called the crossover probability and is also defined as a system parameter.
In crossing over, a pair of chromosomes (parents) is cut into two pieces at a random point (the cut point), and two offspring are established by exchanging the pieces: the first offspring takes the first piece of the first parent and the second piece of the second parent, while the second offspring takes the first piece of the second parent and the second piece of the first parent.
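The piece exchange above is single-point crossover on bit strings; a minimal Python sketch (names are ours) is:

```python
import random

def crossover(parent_a, parent_b, cut=None, rng=random):
    """Single-point crossover on two equal-length bit strings.

    Offspring 1 = first piece of parent A + second piece of parent B;
    offspring 2 = first piece of parent B + second piece of parent A.
    """
    if cut is None:
        cut = rng.randrange(1, len(parent_a))  # random cut point
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]
```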
Although we cannot claim that "two good parents always reproduce two much better offspring", this operation enables a positive evolution towards the ideal solution by mixing and matching characteristics of the original solution set. Without crossover, we would be stuck with a set of solutions selected from an initial set of random solutions purely on the basis of relative fitness values, and would not be able to evolve towards better solutions.
4.4.4. Mutation
This is yet another classical GA operation. It is the mutation process that makes the GA a global optimization method: without mutation, the solution could become stuck in a local minimum of the parameter space. The only way to escape such local minima is the mutation process, in which the bits of the decision variables are changed randomly with a small likelihood. The whole population is scanned bit by bit, and random bits are selected to be mutated. The mutation operation is a unary logical NOT, i.e. it simply flips the value of the bit (1 becomes 0 and vice versa).
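The bit-flip mutation described above can be sketched in Python as follows (a minimal sketch; names are ours, and chromosomes are represented as '0'/'1' strings for illustration):

```python
import random

def mutate(chromosome, p_mut=0.1, rng=random):
    """Bit-flip mutation: scan the bit string and flip each bit
    ('0' <-> '1') independently with probability `p_mut`."""
    return ''.join(
        ('1' if bit == '0' else '0') if rng.random() < p_mut else bit
        for bit in chromosome
    )
```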
After the above procedures have been repeated as many times as the number of generations value, the GA engine ends. During this process, the best chromosome among all the chromosomes of all populations (generations) is recorded, in the following manner.
Initially, the first chromosome in the very first population is taken as the best chromosome (i.e. solution). It is then compared, on the basis of fitness value, with the other chromosomes in the same population; whenever a better chromosome is found, it becomes the new best and the remaining chromosomes are compared with it. Eventually the best chromosome of the population is found. This chromosome propagates to the next generation, where the search for a better one continues. In this fashion, the best solution is tracked until the process ends.
The X0, Y0, C and S values in the best solution are considered the real target parameters, and this set becomes the solution of
Fig. 6. The flow diagram of TMAGA: save the most recent N-bearing data; initialize the population; perform some dummy runs; tune chromosome lengths; decode the chromosomes; GA run (conduct N generations of GA: evaluate the chromosomes, form a new population from the previous one, keep track of the best solution, cross over, mutate); find and store the best values with their associated total deviations; run Monte Carlo trials M times; calculate the weighted mean values of the TMA parameters μ_X0, μ_Y0, μ_C, μ_S.
TMA. It should also be noted that this solution has a total deviation value (the difference between the measured bearings and the bearings derived from the solution) which is smaller than the total deviation of any other solution. This value is stored as the total_deviation of the best solution, to be used later.
Since a GA is not a deterministic way of solving real-time problems, other methods should be deployed to avoid the drawbacks of stochastic approaches. Monte Carlo simulation is a good way of reaching a sounder conclusion by conducting multiple individual stochastic experiments. We have therefore adopted an approach in which the GA engine is run in the above-mentioned fashion multiple times, implementing a form of Monte Carlo simulation. We generally set the number of Monte Carlo trials to a value between 20 and 50. The individual solution of every trial is stored with its corresponding total deviation value. Each Monte Carlo solution is then assigned a weight factor based on its total deviation, and the weighted mean of each TMA parameter (X0, Y0, C, S) is calculated from the values obtained in the individual solutions. The weight of each solution and the weighted means of the TMA parameters are calculated as follows:
λ_i = (Σ_{j=1}^{M} Δ_j) / Δ_i    (4)

μ_{X0} = Σ_{i=1}^{M} X_{0(i)} λ_{(i)} / Σ_{j=1}^{M} λ_{(j)},    μ_{Y0} = Σ_{i=1}^{M} Y_{0(i)} λ_{(i)} / Σ_{j=1}^{M} λ_{(j)}
μ_C = Σ_{i=1}^{M} C_{(i)} λ_{(i)} / Σ_{j=1}^{M} λ_{(j)},    μ_S = Σ_{i=1}^{M} S_{(i)} λ_{(i)} / Σ_{j=1}^{M} λ_{(j)}    (5)

where M is the number of Monte Carlo runs, Δ_i is the total deviation of solution i, λ_i is the weight of solution i, and μ_{X0}, μ_{Y0}, μ_C and μ_S are the weighted means of the TMA parameters.
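The weighted combination of Eqs. (4) and (5) can be sketched in Python as follows (a minimal sketch; names are ours, and each solution is represented as a tuple (X0, Y0, C, S)):

```python
def weighted_tma_solution(solutions, deviations):
    """Combine M Monte Carlo solutions (X0, Y0, C, S) into weighted
    means per Eqs. (4)-(5): each solution i gets the weight
    lambda_i = (sum of all deviations) / (its own deviation)."""
    total = sum(deviations)
    weights = [total / d for d in deviations]      # Eq. (4)
    wsum = sum(weights)
    return tuple(                                  # Eq. (5), per parameter
        sum(sol[k] * w for sol, w in zip(solutions, weights)) / wsum
        for k in range(4)
    )  # (mu_X0, mu_Y0, mu_C, mu_S)
```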
Fig. 7. Trials 1–3 tactical picture. Blue lines demonstrate ground truth, red lines the TMAGA solution; ownship is also shown. The absolute differences are: Trial 1: ΔX = 4.1 m, ΔY = 1.3 m, ΔC = 1.8°, ΔS = 0.02 m/s; Trial 2: ΔX = 35.6 m, ΔY = 210.2 m, ΔC = 3.8°, ΔS = 0.01 m/s; Trial 3: ΔX = 25.6 m, ΔY = 60.7 m, ΔC = 0.1°, ΔS = 0.14 m/s. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Eventually, the μ_X0, μ_Y0, μ_C and μ_S values are the fruit of TMAGA and are considered the final solution. The overall process of TMAGA is depicted in Fig. 6.
In all trials, the ownship speed parameter is kept constant at 2 m/s, which is very close to the operational speed of a submerged diesel-powered submarine. It is also assumed that targets move along a straight line at constant speed and that ownship makes a turn in order to satisfy the observability conditions.
The trials take place in an X–Y coordinate system in which the values on the Y axis are in reverse order (i.e. the negative side points upward instead of the positive side). The positions of ownship and targets are given in this coordinate system, and the metric system is used for distance and speed. Population size, number of generations, and number of Monte Carlo trials are 50, 500 and 20, respectively.
5. Experiments
In this section, we show the results of various trials conducted to validate our algorithm. The effects of measurement quality (i.e. the variance of the bearing error) and of the probabilistic elements of the GA (i.e. the crossover and mutation probabilities) are also demonstrated (see Fig. 8).
Fig. 8. Trials 4–6 tactical picture. Blue lines demonstrate ground truth, red lines the TMAGA solution; ownship is also shown. The absolute differences are: C = 175°: ΔX = 6 m, ΔY = 52.1 m, ΔC = 0.3°, ΔS = 0.13 m/s; C = 150°: ΔX = 23.5 m, ΔY = 44.7 m, ΔC = 0.6°, ΔS = 0.42 m/s; C = 120°: ΔX = 3 m, ΔY = 24.3 m, ΔC = 1.8°, ΔS = 0.08 m/s. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Crossover and mutation probabilities are 0.2 and 0.1, respectively, in normal trials (see Fig. 9).
In every trial, although one solution would be enough, we solve the problem 20 different times (independently of each other) in order to show the consistency of the solutions. Each independent solution took 1–3 min on a Pentium 4, Windows XP machine (see Fig. 10).
The independent solutions are listed only for the first three trials (see Table 2) in order to save space. The average of these 20 independent solutions and the deviation of the decision variables from the ground-truth parameters (as absolute difference and percentage deviation) are also given and attached to the end of
Table 2
Ownship parameters through Trial 1 to Trial 3

Ownship: Speed = 2 m/s; X0 = 14,000, Y0 = 14,000; STD = 0.3
LEG 1: Course = 345°; 12 bearings collected, one measurement every 15 s
LEG 2: Course = 70°; 12 bearings collected
that table. To calculate the percentage deviation of a decision variable, we divide the absolute difference by the difference between the
Fig. 9. Trials 7–9 tactical picture. Blue lines demonstrate ground truth, red lines the TMAGA solution; ownship is also shown. The absolute differences are: σ = 0: ΔX = 29.4 m, ΔY = 12.5 m, ΔC = 0.3°, ΔS = 0.11 m/s; σ = 0.5: ΔX = 7.3 m, ΔY = 89.1 m, ΔC = 1.1°, ΔS = 0.05 m/s; σ = 1: ΔX = 51.2 m, ΔY = 46.2 m, ΔC = 4°, ΔS = 0.55 m/s. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 10. Trials 10–12 tactical picture. Blue lines demonstrate ground truth, red lines the TMAGA solution; ownship is also shown. The absolute differences are: σ = 2: ΔX = 78.1 m, ΔY = 183 m, ΔC = 0.2°, ΔS = 0.74 m/s; σ = 5: ΔX = 63.3 m, ΔY = 739.6 m, ΔC = 5.9°, ΔS = 0.77 m/s; σ = 30: ΔX = 450 m, ΔY = 2136.6 m, ΔC = 46°, ΔS = 0.17 m/s. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
maximum and minimum values of that decision variable. So, to find the percentage deviations of the X0, Y0, course and speed decision variables, we divide the absolute differences by 20,000 m, 20,000 m, 360° and 25 m/s, respectively.
The average of the solutions is plotted in an MS Excel chart to allow a visual comparison between the ground truth and the GA solution. Since the ownship movement is also plotted, the tactical picture can be seen on these charts as well. We combine the tables and Excel charts of similar trials in order to save room and provide the reader with a visual comparison of different trials.
increase the bearing rate. Table 3 shows the results of the 20 different runs in each trial, their averages, and their deviations from ground truth as absolute differences and percentage deviations.
As can be seen from the chart, TMAGA is most successful in the case where the bearing rate is relatively high (the closest target); for the middle target, where the bearing rate is lowest, TMAGA is least successful. In all cases, however, the results are quite acceptable.
5.1.2. Trials 4–6
In these trials, we test TMAGA for targets with different angles on the bow. All targets start at the same position and approach ownship with angles on the bow of 5° (C = 175°), 30° (C = 150°) and 60° (C = 120°), in that order. The speed is constant at 10 m/s (18 kts), and the ownship parameters are the same as in the previous trials. Although we run TMAGA 20 times for each trial, only the averages and deviations are shown in the following tables (see Table 4).
Although the 5° angle-on-the-bow case is barely observable, TMAGA is very successful even here; indeed, TMAGA is very successful in all cases.
5.1. Validation of TMAGA
In the validation tests, we solve the TMA of targets with different kinematic parameters. The parameters are chosen to resemble the most common real-life TMA cases and to validate the algorithm under different observability conditions.
5.1.1. Trials 1–3
Through Trials 1–3, we solve the TMA of targets creating different degrees of bearing rate. In all trials the ownship movement is identical; the corresponding ownship parameters are given in Table 2. In Trial 1, a target is located 8000 m away from ownship and moving East at a speed of 10 m/s (18 kts); the bearing rate in this trial is moderate. In Trial 2, we pull the target in to 7200 m and decrease the speed from 10 m/s to 6 m/s to lessen the bearing rate. In Trial 3, the course and speed parameters are the same as in Trial 2, but we pull the target in from 7200 m to 4000 m to
5.2. Effect of measurement data quality
We now examine the effect of measurement errors on the TMA calculation. As expected, as the measurement error increases, the TMA result deviates from ground truth. In the following trials, we solve the TMA of a target using data contaminated with different degrees of error. The target and ownship parameters are kept steady through Trials 7–12; the only thing that changes is
Table 3
Trials 1–3 results

              Trial 1                             Trial 2                             Trial 3
#             X0       Y0      Course  Speed     X0       Y0      Course  Speed     X0       Y0       Course  Speed
1             11940.6  6011.1  91.2    10.31     12994.7  6942.2  81.4    6.30      12990.2  10050.4  89.6    6.07
2             11950.1  6053.0  91.2    10.26     13003.2  6981.4  85.0    6.18      13006.5  10076.7  91.8    5.97
3             11958.5  6089.7  91.4    10.21     13020.3  7014.9  93.0    5.99      13002.9  10095.6  89.0    6.01
4             11969.6  6121.5  92.0    10.15     13028.4  7110.3  90.3    6.12      13003.9  10115.0  87.5    6.03
5             11978.2  6162.7  91.5    10.11     13019.7  7127.7  83.8    6.16      13018.9  10125.2  91.1    5.90
6             11986.1  6196.6  91.3    10.07     13020.3  7130.4  83.9    6.15      13016.9  10137.5  90.3    5.91
7             11985.8  6197.5  91.3    10.08     13034.6  7147.7  88.5    6.04      13017.0  10139.5  90.1    5.91
8             11993.5  6202.9  92.2    10.03     13025.8  7180.4  82.9    6.05      13012.6  10144.8  88.6    5.95
9             11988.8  6204.0  91.6    10.06     13029.8  7187.4  87.5    6.00      13009.4  10164.3  86.7    5.99
10            12003.4  6254.8  91.9    9.98      13036.9  7208.2  86.9    5.96      13023.2  10177.8  89.4    5.87
11            12010.5  6281.1  92.3    9.94      13036.2  7231.3  82.4    6.03      13033.2  10196.2  90.8    5.79
12            12008.6  6288.8  91.6    9.96      13038.9  7234.8  86.7    5.92      13029.1  10198.2  90.7    5.83
13            12019.3  6313.6  91.9    9.90      13043.4  7242.1  87.3    5.95      13023.5  10200.8  88.1    5.88
14            12019.0  6320.4  91.8    9.91      13048.9  7262.0  90.5    5.90      13037.5  10222.2  91.0    5.76
15            12024.8  6331.6  92.3    9.87      13042.4  7282.5  82.2    6.03      13042.8  10235.1  90.9    5.74
16            12036.8  6381.8  92.4    9.81      13045.8  7301.5  85.3    5.96      13042.0  10244.9  90.0    5.76
17            12043.5  6405.2  92.0    9.79      13055.0  7344.3  86.6    5.87      13039.9  10245.0  88.9    5.78
18            12045.6  6421.3  91.9    9.77      13055.3  7378.8  86.5    5.92      13045.4  10259.1  90.4    5.73
19            12059.9  6468.6  92.3    9.70      13063.8  7448.4  85.5    5.82      13048.2  10259.6  90.3    5.71
20            12063.3  6480.8  92.3    9.68      13069.7  7470.8  87.4    5.83      13069.8  10324.4  92.4    5.55

Ground        12,000   6260    90      10        13,000   7000    90      6         13,000   10120    90      6
Aver. Solut.  12004.1  6258.7  91.8    9.98      13035.6  7210.2  86.2    6.01      13025.6  10180.7  89.9    5.86
Abs. Dev.     4.1      1.3     1.8     0.02      35.6     210.2   3.8     0.0       25.6     60.7     0.1     0.1
% Dev.        0.02     0.01    0.51    0.08      0.14     0.84    1.05    0.04      0.10     0.24     0.04    0.57
Table 4
Trials 4–6 results

              Trial 4                          Trial 5                           Trial 6
              X0     Y0      Course  Speed    X0       Y0      Course  Speed    X0     Y0      Course  Speed
Ground        14010  6000    175     10       14010    6000    150     10       14010  6000    120     10
Aver. Solut.  14016  5947.9  174.7   9.87     13986.5  5955.6  149.4   10.42    14013  5975.7  121.8   10.08
Abs. Dev.     6.0    52.1    0.3     0.13     23.5     44.4    0.6     0.42     3.0    24.3    1.8     0.08
% Dev.        0.02   0.21    0.09    0.51     0.09     0.18    0.15    1.69     0.01   0.10    0.51    0.33
the standard deviation of the measurement errors. We start with zero error (σ = 0°) in the first trial and increase the standard deviation σ to 0.5°, 1°, 2°, 5° and 30° in the later trials, respectively.
In Trial 12 the error is extreme (roughly ±90°). This trial is conducted to show the behavior of TMAGA under very noisy conditions. Although the position estimate was very weak, the same cannot be said for course and speed: the estimated direction of the target was at least in the same quarter circle as the ground truth, and the speed was almost the same as the ground speed.
5.2.1. Trials 7–9
In these trials the measurement errors are relatively low and acceptable for underwater measurements. The zero-error case is tested in Trial 7, in which the measured bearings are the exact lines of sight between the real target and ownship. In Trials 8 and 9, the standard deviations of the measurement errors are 0.5° and 1°, respectively.
Table 5 shows that there is not much difference in the success of these three trials. Although the solution would be expected to coincide with the ground truth in Trial 7, it does not. This is a good illustration of the genetic algorithm being a stochastic method: since it is not deterministic, it does not guarantee the absolute best solution every time. Even in the case where the standard deviation of the error is 1° (roughly ±3°), which is somewhat high, the solution is very successful.
5.3. Effect of crossover probability
The details of the crossover operation are explained above. Throughout our trials we apply a crossover probability of 0.2, which we found to be an optimum value for the TMA problem by trial and error. We noticed that as the crossover probability increases, the effect of the selection operation decreases: with a high crossover probability most of the parents are forced to mate and give way to new offspring, so even highly fit chromosomes lose their place. On the other hand, if the crossover probability is too low, few new offspring are produced and most parents pass to the new generation unchanged.
5.2.2. Trials 10–12
The measurement errors are considerably high in these trials: the standard deviations are 2°, 5° and 30° (roughly ±90°), in order. Measurements contaminated with such large errors are hardly acceptable in real life.
TMAGA is still successful in the case where the standard deviation of the error is 2° (roughly ±6°). Note that the solution is found with only 24 measurements; with such a large error and so few measurements, even a Kalman-filter-based method would not be able to initialize. TMAGA may also be considered successful in Trial 11, where σ = 5°: the target is estimated to be in the close vicinity of the ground truth. Even this solution would be more than enough for a submarine attack team preparing an active torpedo launch.
5.3.1. Trials 13–15
In the following trials, we solve the TMA of the target used in the previous trials, with crossover probabilities of 0.01 (too low), 0.5 (relatively high) and 0.99 (too high), in order.
In all these trials the solutions are very close to each other. As noted above, we found an optimum crossover probability of 0.2 for the TMA solution; as we move away from this optimum value, the precision of the algorithm starts to drop. However, the solutions at high and low crossover probabilities are also quite successful and acceptable. Briefly, we conclude from these trials that the solution approaches the ground truth very precisely (as in Trials 1–6) only at the optimum crossover probability; at other, lower or higher, probabilities the solution stays at some distance from the ground truth (see Table 6).
Table 5
Trials 7–9 results

              Trial 7 (σ = 0)                    Trial 8 (σ = 0.5)                  Trial 9 (σ = 1)
              X0       Y0       Course  Speed   X0       Y0       Course  Speed   X0       Y0       Course  Speed
Ground        13000    10120    90      10      13000    10120    90      10      13000    10120    90      10
Aver. Solut.  13029.4  10132.5  89.7    9.89    13007.3  10209.1  88.9    9.95    13051.2  10166.2  94.0    9.45
Abs. Dev.     29.4     12.5     0.3     0.11    7.3      89.1     1.1     0.05    51.2     46.2     4.0     0.55
% Dev.        0.12     0.05     0.07    0.03    0.03     0.36     0.32    0.21    0.20     0.18     1.12    2.20
Table 6
Trials 10–12 results

              Trial 10 (σ = 2)                   Trial 11 (σ = 5)                   Trial 12 (σ = 30)
              X0       Y0       Course  Speed   X0       Y0       Course  Speed   X0       Y0       Course  Speed
Ground        13000    10120    90      10      13000    10120    90      10      13000    10120    90      10
Aver. Solut.  13078.1  10303.0  89.8    9.26    13062.3  10859.6  84.1    9.23    13450.8  12256.6  44      9.83
Abs. Dev.     78.1     183.0    0.2     0.74    62.3     739.6    5.9     0.77    450.8    2136.6   46      0.17
% Dev.        0.31     0.73     0.05    2.95    0.25     2.96     1.63    3.07    1.80     8.55     12.77   0.67
Table 7
Trials 13–15 results

              Trial 13 (P(c.o.) = 0.01)          Trial 14 (P(c.o.) = 0.5)           Trial 15 (P(c.o.) = 0.99)
              X0       Y0       Course  Speed   X0       Y0       Course  Speed   X0       Y0       Course  Speed
Ground        13000    10120    90      10      13000    10120    90      10      13000    10120    90      10
Aver. Solut.  13085.2  10415.6  90.6    9.08    13080.9  10401.6  90.4    9.17    13076.4  10434.2  89.1    9.23
Abs. Dev.     85.2     295.6    0.6     0.92    80.9     281.6    0.4     0.83    76.4     314.2    0.9     0.77
% Dev.        0.34     1.18     0.16    3.69    0.32     1.13     0.11    3.33    0.31     1.26     0.25    3.09
Table 8
Trials 16–18 results

              Trial 16 (P(mut) = 0.001)         Trial 17 (P(mut) = 0.5)           Trial 18 (P(mut) = 0.99)
              X0       Y0      Course  Speed   X0       Y0      Course  Speed   X0       Y0       Course  Speed
Ground        13000    10120   90      10      13000    10120   90      10      13000    10120    90      10
Aver. Solut.  13004.9  9898.2  94.9    10.97   13026.9  9985.9  92.6    10.16   12993.5  10038.3  89.5    10.31
Abs. Dev.     4.9      221.8   4.9     0.97    26.9     134.1   2.6     0.16    6.5      81.7     0.5     0.31
% Dev.        0.02     0.97    1.20    3.86    0.11     0.54    0.73    0.66    0.03     0.33     0.14    1.24
Table 9
Summary of trials

     Trial inputs                                              Results
#    Range (m)  AOB  Speed (m/s)  σ      P(c.o.)  P(mu)    ΔX0 (m)  ΔY0 (m)  Δrange (m)  Δcourse (°)  Δspeed (m/s)
1    8000       80°  10           0.3°   0.2      0.1      4.1      1.3      4           1.8          0.02
2    7200       80°  6            0.3°   0.2      0.1      35.6     210.2    213         3.8          0
3    4000       80°  6            0.3°   0.2      0.1      25.6     60.7     66          0.1          0.1
4    8000       5°   10           0.3°   0.2      0.1      6        52.1     52          0.3          0.13
5    8000       30°  10           0.3°   0.2      0.1      23.5     44.4     50          0.6          0.42
6    8000       60°  10           0.3°   0.2      0.1      3        24.3     24          1.8          0.08
7    8000       80°  10           0      0.2      0.1      29.4     12.5     32          0.3          0.11
8    8000       80°  10           0.5°   0.2      0.1      7.3      89.1     89          1.1          0.05
9    8000       80°  10           1.0°   0.2      0.1      51.2     46.2     69          4.0          0.55
10   8000       80°  10           2.0°   0.2      0.1      78.1     183      199         0.2          0.74
11   8000       80°  10           5.0°   0.2      0.1      62.3     739.6    742         5.9          0.77
12   8000       80°  10           30.0°  0.2      0.1      450.8    2136.6   2184        46           0.17
13   8000       80°  10           0.3°   0.01     0.1      85.2     295.6    308         0.6          0.92
14   8000       80°  10           0.3°   0.5      0.1      80.9     281.6    293         0.4          0.83
15   8000       80°  10           0.3°   0.99     0.1      76.4     314.2    323         0.9          0.77
16   8000       80°  10           0.3°   0.2      0.001    4.9      221.8    222         4.9          0.97
17   8000       80°  10           0.3°   0.2      0.5      26.9     134.1    137         2.6          0.16
18   8000       80°  10           0.3°   0.2      0.99     6.5      81.7     82          0.5          0.31
5.4. Effects of mutation probability
So far in our trials the mutation probability has been 0.1, which we consider an optimum value for TMAGA. As the mutation probability increases, jumping from region to region of the parameter space increases as well. Although this seems beneficial (no danger of becoming trapped in local minima), the parameter space is then searched only superficially. As stated before, some regions of the parameter space must be searched thoroughly, as in a gradient search technique, to find the TMA solution, and a high mutation probability does not allow such a thorough search of specific regions. On the other hand, if the mutation probability is too low, the danger of becoming trapped in local minima reappears (see Table 7).
5.4.1. Trials 16–18
In the following trials, we solve the TMA of the target used in the previous trials, with mutation probabilities of 0.001 (too low), 0.5 (relatively high) and 0.99 (too high), in order (see Table 8).
The effect of the mutation probability is almost the same as that of the crossover probability: as we move away from the optimum value (0.1), the precision of the algorithm starts to drop. Since the solutions in the high and low mutation probability cases are still quite successful and acceptable, the effect of the mutation probability on the algorithm is likewise limited; the solution is truly precise only at the optimum mutation probability. An overall summary of the trials is given in Table 9.
6. Conclusions
An evolutionary computing approach for solving the TMA problem has been presented, and an application program called TMAGA was introduced. When the amount of error in the contact data is high and the observability of a target is low, stochastic methods are required to deal with the bad characteristics of the problem; underwater conditions in particular introduce large errors into contact data. The problems of Kalman filter based algorithms, especially the initialization problem, do not exist in the evolutionary approach. There is no need for function linearization either, so the proposed algorithm does not have to deal with the errors introduced by the linearization processes of Kalman filter approaches. TMAGA solves the TMA problem mainly by minimizing the difference between the replica signals it produces and the measured bearing data; the replica signals evolve as the algorithm runs and eventually approach the real signal.
Large-scale Monte Carlo simulations were conducted to show the performance of our algorithm and the TMAGA application. Target motion parameters were successfully estimated from contact data contaminated with different degrees of Gaussian noise, even under low observability conditions. Solving a low-observability TMA problem with the genetic algorithm approach took only 1.5–2 min.
Future work may employ a different methodology to solve the TMA problem of a maneuvering target. The motion of a maneuvering target can be modeled in a chromosome with turn-point (Xt, Yt) and new-course (Cn) parameters besides the standard (X0, Y0, C, S) parameters used for a steady target. Frequency information may also be incorporated.
Multi-sensor data fusion in the TMA solution may also be accomplished by using the measurements of various passive and active sensors. In this approach TMAGA still produces replica signals, and the measurements taken by various discrete sensors at different geographical locations can be used to evaluate the fitness of the replica signals. The replica signal best matching the sensor measurements then becomes the solution of the TMA.
References
Aidala, V. J., & Hammel, S. E. (1983). Utilization of modified polar coordinates for
bearing only tracking. IEEE Transactions, AC-28, 283–294.
Aidala, V. J. (1979) Kalman filter behaviour in bearings-only tracking applications.
IEEE Transactions, AES-15, 29–39.
Ancker, C. J. (1958). Airbone direction finding – the theory of navigation errors. IRE
Transactions on Aeronautical Navigational Electron, 5(December), 199–210.
Balckman, S. S., & Popoli, R. (1999). Design and analysis of modern tracking systems.
New York, USA: Artech House Inc.
Berman, Z. (1997). A reliable maximum likelihood algorithm for bearing-only target
motion analysis. In Proceedings of the 36th conference on decision and control, San
Diego, California USA, Vol. 5 (pp. 5012–5017).
Cedeno, W., & Vemuri, V. R. (2000). Database design with genetic algorithms. In D.
Dasgupta & Z. Michalewicz (Eds.), Evolutionary algorithms in engineering
applications. Berlin, Germany: Springer.
Cunningham, A., & Thomas, B., (2005). Target motion analysis visualisation. In Asia
pacific symposium on information visualisation, Sydney, Australia. Conferences in
research and practice in information technology, Vol. 45 (pp. 81–90). Available
online at: <http://crpit.com/confpapers/CRPITV45Cunningham.pdf>.
Dasgupta, D. (2000). Optimal scheduling of thermal power generation using
evolutionary algorithms. In D. Dasgupta & Z. Michalewicz (Eds.), Evolutionary
algorithms in engineering applications. Berlin, Germany: Springer.
Eiben, A. E., & Smith, J. E. (2003). Introduction to evolutionary computing (natural
computing series). Berlin, Germany: Springer.
Fagerlund, S. (1980). Target tracking based on bearing only measurement. LIDS-R1003, MIT technical paper, MA.
Fialkowski, L. et al. (2001). Matched field source tracking by ambiguity surface
averaging. The Journal of the Acoustical Society of America, 110, 739–746.
Fujimoto, O., Okita, Y., & Ozaki, S. (1997). Nonlinearity-compensation extended
Kalman filter and its application to target motion analysis. OKI Technology
Review, No. 159, Vol. 63. Available online at: <http://www.oki.com/en/otr/html/
nf/otr-159-06.html>.
Gong, K. F., & Lindgren, A. G. (1981). Passive localization and motion analysis with a
state parameter constraint. In Proceedings of 15th asilomar conference on circuits,
system and computers.
Gong, K. F., & Speyer, J. L. (1985). A stochastic analysis of a modified gain extended
Kalman filter with applications to estimation with bearings only
measurements. IEEE Transactions, AC-30, 940–949.
Grossman, W., (1991). Bearings-only tracking: A hybrid coordinate system
approach. In Proceedings of the 30th conference on decision and control,
Brighton, England, Vol. 2 (pp. 2032–2037).
Gwiazda, T. D. (2006). Genetic algorithms reference. Lomianki, Poland: Tomasz
Gwiazda.
Jazwinski, A. H. (1973). Stochastic processes and filtering theory. New York, USA:
Academic Press.
Johnson, G. W., & Cohen, A. O. (1983). Modified polar coordinates for ranging from
Doppler and bearing measurements. IEEE Proceedings, 8, 907–910.
King, J. P., Fahmy, H. S., & Wentzell, M. W. (2000). A genetic algorithm approach for
river management. In D. Dasgupta & Z. Michalewicz (Eds.), Evolutionary
algorithms in engineering applications. Berlin, Germany: Springer.
Kronhamn, T. R. (1998). Bearings-only target motion analysis based on a
multihypothesis Kalman filter and adaptive ownship motion control. IEE
Proceedings – Radar, Sonar and Navigation, 145(4), 247–252.
Mitchell, M. (1998). An introduction to genetic algorithms (complex adaptive systems).
Cambridge, MA, USA: MIT Press.
Moon, J. R., & Stevens, C. F. (1996). An approximate linearization approach to
bearings-only tracking. In IEE colloquium on target tracking and data fusion,
Malvern, UK, Vol. 253 (pp. 811–816).
Nardone, S. C., & Lindgren, A. G. (1984). Fundamental properties and performance of
conventional bearings only target motion analysis. IEEE Transactions, AC-29,
775–787.
Ozard, J. et al. (1993). Speed-up and detection performance of a nearest-neighbors
algorithm for matched field processing with a vertical line array. In Proceedings of
IEEE OCEANS '93: Engineering in harmony with ocean (pp. 86–90).
Peach, N. (1995). Bearings-only tracking using a set of range-parameterized
extended Kalman filters. IEE Proceedings – Control Theory, 142(1), 73–80.
Peters, D. J. (2001). A practical guide to level one data fusion algorithms. DREA
Technical Memorandum (TM 2001-201), DRDC Atlantic, Dartmouth, N.S.,
Canada.
Porat, B., & Friedlander, B. (1988). Analysis of the asymptotic relative efficiency of
the MUSIC algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing,
36(4), 532–544.
Searle, S. J. (2005). Efficient matched processing for localization of a moving
acoustic source. Signal Processing, 85, 1787–1804.
Spingarn, K. (1987). Passive position location estimation using the extended
Kalman filter. IEEE Transactions, AES-23, 558–567.
Streit, R. L., & Walsh, M. J. (2002). Bearings-only target motion analysis with
acoustic propagation models of uncertain fidelity. IEEE Transactions on
Aerospace and Electronic Systems, 38, 1222–1237.
Weiss, H., & Moore, J. B. (1980). Improved extended Kalman filter design for passive
tracking. IEEE Transactions, AC-25, 807–811.
Wilhoit, G. Z. (1983). Stochastic control of a passive search for ocean vehicles. MS
thesis, Ocean Engineering, MIT, MA.