Schedule – Monday, July 6, 2015

0800 - 1200
Room 1: T4 - Erik Blasch, Overview of High-Level Information Fusion Theory, Models, and Representations
Room 2: T5 - Subrata Das, Big Data Fusion and Analytics
Room 3: T10 - Thia Kirubarajan, Multisensor-Multitarget Tracker Development and Performance Evaluation for Realistic Scenarios
Room 4: T6 - Ronald Mahler, Advances in Statistical Multisource-Multitarget Information Fusion
Room 5: T2 - Larry Stone, Roy Streit, Kristine Bell, Bayesian Multiple Target Tracking
Room 6: T7 - Galina Rogova, Information Quality in Human-Machine Integrated Environment
1200 - 1300
Lunch

1300 - 1700
Room 1: T1 - Audun Jøsang, Fusion and Belief Reasoning with Subjective Logic
Room 2: T8 - Eric Little, Applications of Scalable Semantic Technologies and Ontologies for Enhanced Higher Level Fusion
Room 3: T11 - Yaakov Bar-Shalom, Multitarget Tracking and Multisensor Information Fusion
Room 4: T12 - Ba-Tuong Vo, Implementations of random-finite-set-based multi-target filters
Room 5: T9 - Mahendra Mallick, Space Surveillance and Space Object Tracking
Room 6: T3 - Felix Govaers, An Introduction to the Distributed Kalman Filter and Track-to-Track Fusion


T1: Fusion and Belief Reasoning with Subjective Logic

Intended Audience:

People who can benefit from this tutorial include researchers, system designers, and developers from industry and academia working in areas such as: information fusion in general, Bayesian networks, machine learning, target and situation classification, and AI and decision support tools.


This tutorial focuses on the theory and applications of subjective logic for modelling and analysing situations that are typically affected by uncertainty, incomplete knowledge, and potentially unreliable and deceptive sources. The advantage of subjective logic is its dual belief-Dirichlet representation, which makes it possible to combine Bayesian reasoning approaches with belief-based uncertainty representation. Subjective logic thereby allows second-order uncertainty to be included in Bayesian reasoning. Subjective logic provides a rich set of operators that can be used, for example, for modelling and analysing Bayesian networks and trust networks. Subjective logic also offers methods for soliciting evidence based on statistical observation as well as on qualitative judgement. Central elements in the tutorial are:

1. Representation and interpretation of subjective opinions in terms of:
– Belief representation of opinions which can be binomial, multinomial or hypernomial
– Dirichlet and Beta probability density function representation of opinions
– Expressing opinions as PDFs (probability density functions) and fuzzy verbal categories

2. Algebraic operators of subjective logic:
– Consistency with probability theory and statistics

3. Applications of subjective logic:
– Fusion operators for various situations
– Subjective network modelling, i.e. extending Bayesian networks with second-order uncertainty
– Modelling and analysis of trust networks
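To make the dual belief-Dirichlet representation above concrete, here is a minimal Python sketch of a binomial opinion and its Beta equivalent, assuming the default non-informative prior weight W = 2 used in Jøsang's writings; the function names are illustrative, not taken from the tutorial material.

```python
# Illustrative sketch (assumption: non-informative prior weight W = 2).
W = 2.0

def opinion_from_evidence(r, s, a):
    """Map positive/negative evidence counts (r, s) and base rate a
    to a binomial opinion (belief, disbelief, uncertainty, base rate)."""
    b = r / (r + s + W)
    d = s / (r + s + W)
    u = W / (r + s + W)
    return b, d, u, a

def projected_probability(b, d, u, a):
    """Projected probability P = b + a*u: second-order uncertainty
    collapsed onto a point probability for decision making."""
    return b + a * u

def beta_parameters(b, d, u, a):
    """Equivalent Beta(alpha, beta) density parameters of the opinion."""
    r = W * b / u   # recovered positive evidence
    s = W * d / u   # recovered negative evidence
    return r + W * a, s + W * (1.0 - a)
```

For example, eight positive and two negative observations with base rate 0.5 give a projected probability of 0.75, matching the mean of the equivalent Beta(9, 3) density.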

The course material handed out to participants consists of:
– Presentation slides,
– Draft book on subjective logic.

No prerequisite knowledge other than basic binary logic, probability calculus and discrete mathematics is required.

Presenter: Audun Jøsang, University of Oslo.

Audun Jøsang is the author of subjective logic, which was first proposed in 1997. Subjective logic is being applied worldwide by researchers and practitioners in the areas of fusion, uncertainty reasoning, and trust and reputation systems. Professor Jøsang joined the University of Oslo in 2008 after having worked as Associate Professor at QUT and research leader for cybersecurity at DSTC in Australia, system design engineer at Alcatel Telecom in Belgium, and research scientist at Telenor in Norway. He was also Associate Professor at the Norwegian University of Science and Technology (NTNU). Prof. Jøsang has a Master's in Information Security from Royal Holloway College, University of London, and a PhD from NTNU.





T2: Bayesian Multiple Target Tracking

Intended Audience:

People wishing to understand the basic theory, results, and methods of multiple target tracking from a standard Bayesian point of view without unnecessary extensions, generalizations, or mathematical formalisms. Researchers desiring to learn how likelihood functions incorporate disparate types of information into data fusion solutions in a principled fashion.


This tutorial is based on the book, Bayesian Multiple Target Tracking 2nd Ed. Its purpose is to present the basic results in multiple target tracking from a Bayesian point of view. People who register will receive a complimentary copy of the book when they attend the tutorial.
Topics:
(1) Bayesian Single Target Tracking, including priors, likelihood functions, posteriors, and motion models; the Bayesian single target tracking recursion; a particle filter implementation; and examples.
(2) Bayesian Multiple Target Tracking, including the general Bayesian recursion for multiple target tracking; Multiple Hypothesis Tracking (MHT), Joint Probabilistic Data Association (JPDA), Probabilistic Multiple Hypothesis Tracking (PMHT); and examples.
(3) Multitarget Intensity Filters (iFilters), which are an extension of Probability Hypothesis Density (PHD) filters. Intensity filters compute the expected number of targets per unit state space without explicitly associating contacts to targets. Topics include a standard Bayesian derivation of the iFilter recursion as well as a probability generating function approach to iFilters.
(4) The Maximum A Posteriori – Penalty Function (MAP-PF) method of performing multiple target tracking. In MAP-PF, the tracker generates contacts for each target present, obviating the need for data association.
(5) Likelihood Ratio Detection and Tracking (LRDT) and iLRT. LRDT is a Bayesian track-before-detect method that is useful in low signal-to-noise ratio (SNR) and high clutter situations. It can use both thresholded and unthresholded sensor responses and can be used as a multiple target detection and track initiation system. iLRT is a combination of LRDT and the iFilter that produces multiple target detections and tracks from the intensity function outputs of the iFilter.
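As a flavor of the particle filter implementation mentioned in topic (1), one predict-update-resample cycle of a bootstrap particle filter for a hypothetical scalar random-walk target with a Gaussian measurement model might look like the following sketch (all model choices and parameter values are illustrative assumptions, not taken from the book):

```python
import math
import random

def particle_filter_step(particles, weights, z, q=1.0, r=0.5):
    """One bootstrap particle filter cycle: predict through an assumed
    random-walk motion model (process noise std q), weight by an assumed
    Gaussian measurement likelihood (noise std r), then resample."""
    # Predict: propagate each particle through the motion model.
    particles = [x + random.gauss(0.0, q) for x in particles]
    # Update: multiply each weight by the likelihood p(z | x).
    weights = [w * math.exp(-0.5 * ((z - x) / r) ** 2)
               for x, w in zip(particles, weights)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Resample (multinomial) to combat weight degeneracy.
    particles = random.choices(particles, weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

After a few cycles observing a stationary target, the weighted particle mean concentrates around the measurement value.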

Prerequisites: General familiarity with probabilistic concepts such as random variables, probability distributions, density functions, conditional probabilities, and expectations. Some familiarity with multivariate calculus and basic vector and matrix operations is also desirable.

Presenter: Lawrence D. Stone; Roy L. Streit; and Kristine L. Bell, Metron, Inc.

Lawrence D. Stone is Chief Scientist at Metron in Reston, Virginia, a member of the National Academy of Engineering, and a fellow of the Institute for Operations Research and the Management Sciences. The Operations Research Society of America awarded the 1975 Lanchester Prize to his book, Theory of Optimal Search. In 1986, he produced the probability maps used to locate the S.S. Central America, which sank in 1857, taking millions of dollars of gold coins and bars to the ocean bottom. In 2010 he led the team that produced the probability distribution that guided searchers to the location of the underwater wreckage of Air France Flight AF447. He is a coauthor of Bayesian Multiple Target Tracking, 2nd Ed. He continues to work on detection and tracking systems for the US Navy and Coast Guard, including the Coast Guard's Search And Rescue Optimal Planning System used to plan searches for people missing at sea.

Roy L. Streit is a Senior Scientist at Metron in Reston, Virginia, and Professor (Adjunct) of Electrical and Computer Engineering, University of Massachusetts–Dartmouth. Fellow of the IEEE. Research interests include multi-target tracking, multi-sensor data fusion, medical imaging, signal processing, pharmacovigilance, and business analytics. Author, Poisson Point Processes, Springer, 2010 (Chinese translation, 2013). Co-author, Bayesian Multiple Target Tracking, 2nd Edition, Artech, 2014. He has (co)-authored numerous technical papers and invited papers at international conferences. Nine U.S. patents. Before 2005, Dr. Streit was in Senior Executive Service at the Naval Undersea Warfare Center in Newport, RI. Navy Superior Civilian Achievement Award. American Society of Naval Engineers Solberg Award. Exchange Scientist DSTO, Adelaide, Australia, 1987–89. Visiting Scientist, Yale University, 1982–84. Visiting Scholar, Stanford University, 1981–82. B.A., Physics and Mathematics, East Texas State University, 1968; M.A., Mathematics, University of Missouri at Columbia, 1970; Ph.D., Mathematics, University of Rhode Island.

Kristine L. Bell is a Senior Scientist at Metron, Inc. and also holds an Affiliate Faculty position in the Statistics Department at George Mason University (GMU). She received the B.S.E.E. from Rice University in 1985, and the M.S.E.E. and Ph.D. from GMU in 1990 and 1995. Her technical expertise is in the area of statistical signal processing for source localization and tracking with applications in radar, sonar, aeroacoustics, and satellite communications. She has (co)-authored three books and over seventy journal and conference papers in these areas including Bayesian Multiple Target Tracking, 2ed.
From 1996-2009, Dr. Bell was an Associate/Assistant Professor in the Statistics Department and C4I Center at GMU. During this time she was also a visiting researcher at the Army Research Laboratory and the Naval Research Laboratory. In 2009, she received the GMU Volgenau School of Engineering Outstanding Alumnus Award. She is a Fellow of the IEEE.





T3: An Introduction to the Distributed Kalman Filter and Track-to-Track Fusion

Intended Audience:

The intended audience includes engineers, PhD students, and others working in the field of distributed sensor data fusion. The algorithmic and theoretical background of track-to-track fusion, tracklet fusion, and the distributed Kalman filter should be of interest to the audience. Problems, questions, and specific interests are welcome for an open discussion.


The increasing trend towards connected sensors ("internet of things" and "ubiquitous computing") drives a demand for powerful distributed estimation methodologies. In tracking applications, the "Distributed Kalman Filter" (DKF) provides an optimal solution under certain conditions. The optimal solution in terms of estimation accuracy is also achieved by a centralized fusion algorithm that receives either all associated measurements or so-called "tracklets". However, this scheme needs the result of each update step to achieve the optimal solution, whereas the DKF works at arbitrary communication rates since the calculation is completely distributed. Two more recent methodologies are based on "accumulated state densities" (ASD), which augment the states from multiple time instants. In practical applications, tracklet fusion based on the equivalent measurement often achieves reliable results even if full communication is not available. The limitations and robustness of tracklet fusion will be discussed. The tutorial will first explain the origin of the challenges in distributed tracking; possible solutions are then derived and examined, and algorithms will be provided for each presented solution. The list of topics includes: Short introduction to target tracking, Tracklet Fusion, Exact Fusion with cross-covariances, Naive Fusion, Federated Fusion, Decentralized Fusion (Consensus Kalman Filter), Distributed Kalman Filter (DKF), Debiasing for the DKF, Distributed ASD Fusion, Augmented State Tracklet Fusion.
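As a simple illustration of one item on that list, naive fusion of two scalar track estimates can be sketched as an inverse-variance weighted combination that ignores the cross-covariance between the tracks (an illustrative sketch under that scalar assumption, not the tutorial's material):

```python
def naive_fusion(x1, p1, x2, p2):
    """Naive (convex) fusion of two scalar track estimates (x1, x2)
    with variances (p1, p2), ignoring their cross-covariance.
    When the tracks share common process noise this fused variance
    is optimistic, which is why exact fusion with cross-covariances
    is also covered in the tutorial."""
    p = 1.0 / (1.0 / p1 + 1.0 / p2)   # fused variance
    x = p * (x1 / p1 + x2 / p2)       # inverse-variance weighted mean
    return x, p
```

Two equally confident estimates of 0 and 2 fuse to 1 with the variance halved; an estimate with a smaller variance pulls the fused result toward itself.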

Prerequisites: Participants should have some background knowledge on basic operations in stochastic theory and linear algebra.

Presenter: Felix Govaers.

Felix Govaers received his Diploma in Mathematics and his PhD, with the title "Advanced data fusion in distributed sensor applications", in Computer Science, both at the University of Bonn, Germany. Since 2009 he has worked at Fraunhofer FKIE in the department for Sensor Data Fusion and Information Processing, where he now leads the team "Distributed Systems".
The research of Felix Govaers is focused on data fusion for state estimation in sensor networks. This includes track-extraction, processing of delayed measurements as well as the Distributed Kalman filter and track-to-track fusion.





T4: Overview of High-Level Information Fusion Theory, Models, and Representations


The information is organized in three one-hour sessions that cover HLIF theories (operational, functional, formal, and cognitive) mapped to representations (semantics, ontologies, axiomatics, and agents), together with contemporary issues of modelling, testbeds, evaluation, and human-machine interfaces. Lesson 1 provides an introduction to HLIF concepts, including the JDL/DFIG models, moving from low-level information fusion (LLIF) (e.g., Level 1 Object Assessment) to high-level information fusion (HLIF) (e.g., Level 5 User Refinement). The overview includes HLIF models, grand challenges, and comparisons of theories, representations, and implementations. Lesson 2 focuses on Situation Awareness (SAW) models in relation to Level 2 Situation Assessment, including process, interpreted, and state transition models, which lead to SAW projection/prediction concepts for HLIF. Discussions with examples from search and rescue, cyber analysis, and battlefield awareness are presented. Lesson 3 describes the bridge between LLIF and HLIF in information management for system designs, including architectures, testbeds, and human-computer interface issues (models, displays, interaction). Coupled with information management is information fusion system evaluation: scenario-based design, man-in-the-loop analysis, and evaluation metrics. The attendee will gain an appreciation of HLIF through the topic organization from the perspectives of numerous authors, practitioners, and developers of information fusion systems. The tutorial is organized as per the recent text:
E. P. Blasch, E. Bosse, and D. A. Lambert, Information Fusion Theory and Representations, Artech House, April 2012.

No prerequisites are needed; however a basic understanding and interest in information fusion is helpful to appreciate the current state of the art in the ISIF community.

Presenter: Erik Blasch, AFRL.

Erik Blasch received his B.S. from the Massachusetts Institute of Technology, seven M.S. degrees in engineering, business, and psychology, and a Ph.D. in electrical engineering from Wright State University (WSU). From 2000-2010, Dr. Blasch was the information fusion evaluation tech lead for the Air Force Research Laboratory (AFRL) COMprehensive Performance Assessment of Sensor Exploitation (COMPASE) Center, a WSU affiliated professor, and a reserve Air Force officer. From 2010-2012, Dr. Blasch was an exchange scientist to Defence R&D Canada at Valcartier, Quebec. In 2012, he returned to AFRL to lead information fusion system deployments. He compiled over 30 top ten finishes as part of robotic teams in international contests, received the IEEE Russ Bioengineering Award, and the Joseph Mignogna Data Fusion Award from the Joint Directors of Laboratories Data Fusion Group. He is a past President of ISIF, an IEEE AESS Board of Governors member, AIAA Associate Fellow, and a SPIE Fellow.





T5: Big Data Fusion and Analytics

Intended Audience:

The intended audience includes designers and developers of analytics systems for any vertical (e.g., defense, healthcare, finance and accounting, human resources, customer support, transportation) who work within business organizations around the world. They will find the tutorial useful as a vehicle for moving towards a new generation of big data fusion and analytics approaches.


Big data has tremendous potential to transform businesses but poses significant challenges in searching, processing, and extracting actionable intelligence. In this tutorial, I will present some techniques for fusion and analytics to process big centralized warehouse data, inherently distributed data, and data residing on the cloud. The fusion and analytics techniques to be discussed will handle both structured transactional and sensor data as well as unstructured textual data such as human intelligence, emails, blogs, surveys, etc.

As a background, this tutorial is intended to provide an account of both the cutting-edge and the most commonly used approaches to high-level data fusion and predictive and text analytics. The demos to be presented are in the areas of distributed search and situation assessment, information extraction and classification, and sentiment analyses.

Some of the tutorial materials are based on the following two books by the speaker: 1) Subrata Das. (2008). “High-Level Data Fusion,” Artech House, Norwell, MA; and 2) Subrata Das. (2014). “Computational Business Analytics,” Chapman & Hall/CRC Press.

Tutorial Topics include the following: High-Level Fusion, Descriptive and Predictive Analytics, Text Analytics, Decision Support and Prescriptive Analytics, Cloud Computing, Distributed Fusion, Hadoop and MapReduce, Natural Language Query, Big Data Query Processing, Graphical Probabilistic Models, Bayesian Belief Networks, Distributed Belief Propagation, Text Classification, Supervised and Unsupervised Classification, Deep Learning, Information Extraction, Natural Language Processing.

Prerequisites: Some background in the theory of probability and statistics, data mining, programming languages, and databases will be desired.

Presenter: Subrata Das, Machine Analytics

Subrata Das is the founder of Machine Analytics (www.machineanalytics.com), a company in the Boston area customizing big analytics and data fusion solutions for clients in government and industry. Subrata also provides consulting services to several companies.
Subrata recently spent two years in Grenoble, France, as the manager of over forty researchers in the document content laboratory at the Xerox European Research Centre. In the past, Subrata led many projects funded by DARPA, NASA, the US Air Force, Army, and Navy, ONR, and AFRL. He has held research positions at Imperial College, London, received a PhD in Computer Science from Heriot-Watt University in Scotland, and holds master's degrees from the University of Kolkata and the Indian Statistical Institute.
Subrata has published many journal and conference articles, edited a journal special issue, and regularly gives seminars and training courses based on his books. He is the author of five books, including Computational Business Analytics, published by CRC Press/Chapman and Hall, and High-Level Data Fusion, published by Artech House. Subrata served as a member of the editorial board of the Information Fusion journal, published by Elsevier Science.





T6: Advances in Statistical Multisource-Multitarget Information Fusion

Intended Audience:

Students, researchers, and practitioners who wish to understand the current world state-of-the-art in the random finite set (RFS) information fusion specialty. It will be of special interest to those interested in robotics and target tracking using challenging sensors in difficult sensing environments.


Learning Objectives: Finite-set statistics (FISST, a.k.a. random set information fusion) was first systematically described in 2007 in Statistical Multisource-Multitarget Information Fusion (an authorized Chinese-language edition appeared in 2014). Since then it has had a revolutionary impact on the field, inspiring a considerable amount of highly innovative research in over 18 nations. This includes a number of algorithms that have been shown to significantly outperform conventional approaches. At the conclusion of this tutorial, attendees will have attained a comprehensive overview of the current world state-of-the-art in the field, as described in my 2014 sequel, Advances in Statistical Multisource-Multitarget Information Fusion.
Summary: Multiplatform-multisensor-multitarget systems are multi-object systems: multiple platforms carrying multiple sensors, observing multiple targets using multiple measurements. A rigorous mathematical theory of multi-object systems, point process theory, has been available for a half-century. It is, however, inherently abstract and measure-theoretic. Luckily, one consequence of a well-known theorem is this: the instant that a point process is used in practical information fusion, it reduces to something far more suitable for such application: an RFS. The RFS approach provides systematic techniques for statistically modeling information fusion systems and for deriving practical and effective information fusion algorithms based on principled statistical approximations. One consequence is a family of algorithms that are scalable to the combinatorial complexity of particular applications: Bernoulli filters, PHD filters, CPHD filters, multi-Bernoulli filters, and generalized labeled multi-Bernoulli (GLMB) filters.
Topics: The philosophy of FISST, Misconceptions about FISST, point processes and RFSs, Multitarget calculus, Multitarget statistics, Multitarget modeling and filtering, Multitarget metrology, "Classical" PHD/CPHD filters, Implementation of "classical" PHD/CPHD filters, Multisensor PHD/CPHD filters, Jump-Markov PHD/CPHD filters, Bernoulli and multi-Bernoulli filters, Multitarget smoothers, Generalized labeled multi-Bernoulli (GLMB) filter, Unified simultaneous localization and mapping, Unified track-to-track fusion, Unified multitarget tracking and sensor-bias estimation, RFS filters for dynamic clutter and detection backgrounds, RFS filters for imaging sensors, RFS filters for superpositional sensors, RFS filters for extended targets, RFS filters for group targets, RFS filters for human-mediated information, Introduction to RFS sensor management.
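As a rough illustration of the "classical" PHD filter family listed above, the following is a hedged sketch of one Gaussian-mixture PHD measurement update for a scalar state observed directly (observation matrix H = 1); the parameter values (detection probability, measurement noise, clutter intensity) are illustrative assumptions, not from the book:

```python
import math

def normpdf(z, m, s2):
    """Gaussian density with mean m and variance s2, evaluated at z."""
    return math.exp(-0.5 * (z - m) ** 2 / s2) / math.sqrt(2 * math.pi * s2)

def gmphd_update(components, measurements, p_d=0.9, r=1.0, kappa=0.01):
    """One measurement update of a Gaussian-mixture PHD filter for a
    scalar state observed directly. components: list of
    (weight, mean, variance) tuples of the predicted intensity;
    kappa: assumed uniform clutter intensity at each measurement."""
    # Missed-detection terms: each component survives with weight scaled
    # by the probability of not being detected.
    updated = [(w * (1.0 - p_d), m, p) for w, m, p in components]
    for z in measurements:
        detected = []
        for w, m, p in components:
            s = p + r          # innovation variance (H = 1)
            k = p / s          # Kalman gain
            detected.append((w * p_d * normpdf(z, m, s),
                             m + k * (z - m), (1.0 - k) * p))
        # Normalize the detection terms against clutter plus all detections.
        norm = kappa + sum(w for w, _, _ in detected)
        updated.extend((w / norm, m, p) for w, m, p in detected)
    return updated

def expected_targets(components):
    """Integral of the intensity = expected number of targets."""
    return sum(w for w, _, _ in components)
```

A single predicted component updated with one nearby measurement yields two posterior components (missed-detection and detection terms), with an expected target count close to one.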

Prerequisites: Basic familiarity with single-target and multitarget tracking will be helpful, as well as basic exposure to expert-systems theory (fuzzy logic, Dempster-Shafer). While an introduction to the elements of finite-set statistics will be included, some exposure to topics such as PHD and CPHD filters will be helpful.

Presenter: Ronald Mahler, Random Sets, LLC.

Ronald Mahler has a Ph.D. in mathematics and a B.E.E. in Electrical Engineering and, from 1974-1979, was a mathematics professor at the University of Minnesota. Currently he is a private consultant. He is author or coauthor of over a hundred publications in the random set fusion specialty, including three books and nearly two dozen journal papers. Two of these are the first- and fourth-most-cited papers published in IEEE Trans. Aerospace & Electronic Sys. over the last decade. He is recipient of the 2007 Mignogna Data Fusion Award, the 2005 IEEE AESS Mimno Award, and the 2007 IEEE AESS Carlton Award. He has been listed since 2010 in Who's Who in America and Who's Who in the World. He was a plenary speaker at FUSION2004. Google Scholar research-impact rating: Dr. Mahler's publications have been cited by others at least 3000 times since 2010 and at least 5000 times in total.




T7: Information Quality in Human-Machine Integrated Environment

Intended Audience:

This tutorial is intended for both researchers and practitioners from a wide variety of fields such as communication, intelligence, business processes, personal computing, health care, and databases, who are interested in understanding the problems of information quality in information fusion and building methods for solving these problems.


Objective: The objective of the tutorial is to provide an understanding of the problem of information quality in information fusion, the challenges of representing and incorporating information quality into a fusion-based human-machine information environment, and possible approaches to meeting these challenges.
Information fusion utilizes a large amount of multimedia and multispectral information coming from geographically distributed sources to produce estimates about objects and gain knowledge of the entire domain of interest. Information to be processed and made sense of includes, but is not limited to, data obtained from sensors, surveillance reports, human intelligence reports, operational information, and information obtained from social media and traditional open sources (newspapers, radio, TV, etc.). Successful processing of this information may also demand information sharing and dissemination, and cooperative action by multiple stakeholders. Such complex environments call for an integrated human-machine system, in which some processes are best executed automatically while for others the judgment and guidance of human experts and end-users are critical.

The problem of building such integrated systems is complicated by the fact that data and information obtained from observations and reports, as well as information produced by both human and automatic processes, are of variable quality and may be unreliable, of low fidelity, of insufficient resolution, contradictory, and/or redundant. The success of decision making in a complex fusion-driven human-machine system environment depends on being aware of, and compensating for, insufficient information quality at each step of information exchange. Thus quality considerations play an important role whenever raw data (sensor readings, open sources, database search results, and intelligence reports) enter the system, as well as when information is transferred between automatic processes, between humans, and between automatic processes and humans.
The tutorial will discuss major challenges and some possible approaches to addressing this problem. In particular, it will discuss the notion of information quality; information quality and context; present an ontology of quality of information; and identify potential methods of representing and assessing the values of quality attributes, combining these values into an overall quality measure, and possible approaches to quality control (how to compensate for various information deficiencies).
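As one simple illustration of combining quality-attribute values into an overall quality measure, a normalized weighted average can be sketched as follows; the attribute names and weights are hypothetical, and the tutorial surveys richer combination schemes than this one:

```python
def overall_quality(attributes, weights):
    """Combine quality-attribute scores (each assumed in [0, 1], e.g.
    reliability, timeliness, relevance) into one quality measure by
    normalized weighted averaging. Purely illustrative: attribute
    names and weights are hypothetical, not from the tutorial."""
    total = sum(weights.values())
    return sum(attributes[k] * w for k, w in weights.items()) / total
```

For instance, weighting reliability three times as heavily as timeliness pulls the overall score toward the reliability value.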

Prerequisites: Basic knowledge of information fusion and uncertainty representation are helpful.

Presenter: Galina Rogova, State University at Buffalo.

Galina Rogova is a research professor at the State University at Buffalo as well as an independent consultant (DBA Encompass Consulting). She is a recognized expert in information fusion and decision making, and has lectured internationally on this topic. Her other research expertise includes reasoning under uncertainty, information quality, machine learning, and image understanding. She has worked on a wide range of defense and non-defense problems such as situation and threat assessment, information quality in information fusion, computer-aided diagnosis, and understanding of volcanic eruption patterns, among others. Her research was funded by multiple government agencies as well as commercial companies. She has published numerous papers and edited four books. She served as a committee member, session chair and organizer, and tutorial lecturer for numerous International Conferences on Information Fusion, as well as a member of the organizing committee of the NATO ASI and NATO ARW on information fusion and decision support.





T8: Applications of Scalable Semantic Technologies and Ontologies for Enhanced Higher Level Fusion

Intended Audience:

Any attendee of the conference can participate; there will be no presumed level of expertise. This tutorial will be particularly aimed at those researchers interested in using semantics for higher level fusion processing and algorithm development.

Description: Semantic technologies offer an increasingly capable means to structure, organize, and reason over a wide variety of data types. The emergence of cloud computing, in parallel, has provided semantics with unprecedented means for scaling these applications within large enterprise-level systems. Semantics are useful for certain kinds of modeling and reasoning, whereas statistical approaches (more commonly seen in contemporary cloud analytics engines) are more effective for other applications. Understanding the distinctions between logical and statistical approaches is key to building new higher level fusion systems in large cloud applications, where one must be cognizant of the 4 Vs of cloud computing and analytics (Volume, Velocity, Variety, Veracity). This tutorial will leverage the many years of work Dr. Little has performed in applying semantics to fusion processing. It will explore, in detail, semantics and the important role they can play for higher-level fusion, and provide participants with a comprehensive view of the state of the art in semantically-driven big data analytics as well as the role semantic technologies can play in enhancing those analytics. The tutorial will offer several real-world applications currently being developed and deployed (across a wide variety of customer bases) that show the power of combining semantics with other technology approaches, including NLP processing, math-based graph heuristics, algorithm development, and database management.
The tutorial will cover the following material: a brief history of semantics; the current W3C Semantic Web stack of technologies and how they work together; the use of semantics for various data integration applications; the role of semantics in the 4 Vs of cloud computing and analytics; the role of reasoning in analytics and the difference between logical and statistical approaches to reasoning; a methodology and workflow for designing and developing semantics for fusion applications; and "best practices" for the development and execution of semantics in cloud systems.

Presenter: Eric Little, Modus Operandi, Inc.

Eric Little is VP and Chief Scientist at Modus Operandi in Melbourne, FL and is also an adjunct graduate professor at the NYU Polytechnic School of Engineering. He received a Ph.D. in Philosophy and Cognitive Science in 2002 from the University at Buffalo, State University of New York. His Post-Doctoral Fellowship at the University at Buffalo's Department of Industrial Engineering (2002-2004) focused on developing ontologies for multisource information fusion applications. Dr. Little then spent several years (2004-2009) as Assistant Professor of Doctoral Studies in Health Policy & Education and Director of the Center for Ontology and Interdisciplinary Studies at D'Youville College in Buffalo, NY, during which time he also owned a private and highly successful consulting firm. He left academia to work as Chief Knowledge Engineer at the Computer Task Group (CTG) in 2009 and later worked for Orbis Technologies as Director of Information Management (2010-2013).





T9: Space Surveillance and Space Object Tracking

Intended Audience:

The tutorial is intended for researchers, engineers, and graduate students interested in space surveillance and space object tracking.


Objectives: Space objects (SOs) are Earth-orbiting satellites and space debris. The tutorial will provide an up-to-date overview of space surveillance and introduce orbital mechanics for space objects; describe the equation of motion, force models (gravity, drag, solar radiation pressure), and sensor measurement models; and present an overview of state-of-the-art filtering and tracking algorithms for SO tracking.

Motivation: Space debris poses a serious threat to current and future space missions. At present, the number of SOs larger than one centimeter in low-Earth orbit (LEO) exceeds 300,000. LEO refers to orbits with altitudes up to 2000 km. As of January 31, 2014, there are 1,167 operating satellites from various countries, and more than 22,000 SOs are tracked. More than 95% of the tracked SOs are space debris. Data on SOs from 1960 to 2000 show that the number of SOs is steadily increasing. Kessler showed that continued production of space debris will eventually lead to a chain reaction in which accidental collisions increase exponentially, creating a debris shell in LEO that would render further operations in space impossible.

Summary of Topics:
Overview of space surveillance and SO tracking: current status of space objects and implications for future space programs; types of orbits (LEO, mid-Earth orbit, geosynchronous orbit, and highly elliptical orbit); precision orbit determination (OD) and OD for space surveillance; mathematical preliminaries, coordinate frames and systems, and time systems.
Introduction to orbit determination: translational and rotational equations of motion for a SO; force models (gravity, atmospheric drag, solar radiation pressure, and thrust).
Sensors and measurement models: space surveillance network, radar and optical sensors, light-time corrections, aberration, and atmospheric refraction correction.
Two-body problem: Kepler’s laws and orbital elements; initial OD from range and angle measurements; OD from angle-only measurements; Gibbs algorithm, Lambert-Euler method, and Gauss method.
Review of nonlinear filtering algorithms for OD: continuous-discrete filtering, weighted least squares (differential correction), extended Kalman filter, unscented Kalman filter, and Gaussian sum filter.
Review of candidate algorithms for multitarget tracking: centralized and distributed tracking, multiple hypothesis tracking, and random-finite-set-based multitarget filtering algorithms such as the cardinalized probability hypothesis density and multi-Bernoulli filters.
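The two-body dynamics underlying the orbit-determination topics above can be sketched numerically. The following is a minimal illustration, not course material: it integrates the point-mass equation of motion (acceleration = -mu * r / |r|^3) with a fixed-step RK4 scheme for a circular LEO; the constants, step size, and orbit are illustrative choices.

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def two_body_accel(r):
    """Point-mass gravity: a = -mu * r / |r|^3."""
    return -MU * r / np.linalg.norm(r) ** 3

def rk4_step(state, dt):
    """One fixed-step RK4 integration of the two-body equations.
    state = [x, y, z, vx, vy, vz] in km and km/s."""
    def deriv(s):
        return np.concatenate([s[3:], two_body_accel(s[:3])])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Circular LEO at ~700 km altitude: circular speed is sqrt(mu / r)
r0 = 6378.137 + 700.0
state = np.array([r0, 0.0, 0.0, 0.0, np.sqrt(MU / r0), 0.0])
for _ in range(600):          # 600 steps x 10 s = 100 minutes
    state = rk4_step(state, 10.0)
# For a circular orbit the radius should stay near r0
print(np.linalg.norm(state[:3]))
```

Production orbit propagators add the perturbing forces listed above (higher-order gravity, drag, solar radiation pressure) to the acceleration; the integration structure stays the same.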

Prerequisites: Probability and stochastic processes, undergraduate physics, estimation, filtering, and multitarget tracking.

Presenter: Mahendra Mallick.

Mahendra Mallick is an independent consultant. He received a Ph.D. degree in Quantum Solid State Theory from the State University of New York at Albany and an MS degree in Computer Science from the Johns Hopkins University. He is a senior member of the IEEE and was the Associate Editor-in-Chief of the online journal of the International Society of Information Fusion (ISIF) during 2008-2009. He was a member of the board of directors of the ISIF during 2008-2010. He is a co-editor and an author of the book Integrated Tracking, Classification, and Sensor Management: Theory and Applications, Wiley-IEEE, December 2012. He was the Lead Guest Editor of the Special Issue on Multitarget Tracking in the IEEE Journal of Selected Topics in Signal Processing, June 2013. His current research includes multisensor multitarget tracking, multiple hypothesis tracking, random finite set based multitarget filtering, space object tracking, distributed fusion, and nonlinear filtering.

Back to top menu


T10: Multisensor-Multitarget Tracker Development and Performance Evaluation for Realistic Scenarios

Intended Audience:

This three-hour tutorial will be valuable to students, researchers and practicing engineers at universities, government research labs and companies who are interested in developing tracking and fusion solutions for real-world surveillance problems. It complements (and follows up on) the tutorial typically presented by Prof. Yaakov Bar-Shalom at the Fusion conference, which focuses on theory.


While numerous tracking and fusion algorithms are available in the literature, their implementation and application to real-world problems remain challenging. Since new algorithms continue to emerge, rapidly prototyping them, developing them for production and evaluating them efficiently on real-world (or realistic) problems are also essential. In addition to reviewing state-of-the-art tracking algorithms, this tutorial will focus on a number of realistic multisensor-multitarget tracking problems, simulation of large-scale tracking scenarios, rapid prototyping, development of high performance real-time tracking/fusion software, and performance evaluation on realistic scenarios. A unified tracker framework that can handle a number of state-of-the-art algorithms like the Multiple Hypothesis Tracking (MHT) algorithm, Multiframe Assignment (MFA) tracker and the Joint (Integrated) Probabilistic Data Association (J(I)PDA) tracker is presented. Modules for preprocessing (e.g., coordinate transformations, clutter estimation, thresholding, registration), data association (e.g., 2-D assignment, multiframe assignment, k-best assignment), filtering (e.g., Kalman filter, Interacting Multiple Model (IMM) Estimator, Unscented Kalman filter) and postprocessing (e.g., prediction, classification) are discussed. Fusion software with different architectures is also presented. Integration of sensors like radar, ESA, angle-only, PCL and AIS/ADS-B is demonstrated. Side-by-side performance evaluation of multiple algorithms using more than 30 metrics on realistic large-scale tracking scenarios is presented. A hands-on approach with end-to-end MATLAB and C/C++ software will be the cornerstone of this tutorial.

The topics will include Review of Bayesian state estimation, Multitarget tracking system architecture, Implementation of J(I)PDA/MHT/MFA trackers, Implementation of a multisensor fusion engine, Implementation of realistic simulators, Implementation of a track analytics engine, Performance evaluation of trackers (MOP/MOE), and Real-world examples.
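As a minimal illustration of the filtering module mentioned above, the sketch below implements the textbook linear Kalman filter predict/update recursion for a hypothetical 1-D nearly-constant-velocity target. The model matrices, noise levels, and measurement sequence are illustrative assumptions, not the tutorial's software.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Kalman time update: propagate state and covariance."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Kalman measurement update."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Nearly-constant-velocity model in 1-D, position-only measurements
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
Q = 0.01 * np.eye(2)                   # process noise
H = np.array([[1.0, 0.0]])             # measure position only
R = np.array([[0.25]])                 # measurement noise variance

x, P = np.array([0.0, 1.0]), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:         # noisy positions of a unit-speed target
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
print(x)  # estimated [position, velocity]
```

In a full tracker, this recursion sits inside the data-association loop: each filter update consumes the measurement (or weighted measurements, for J(I)PDA) that association assigns to the track.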

Prerequisites: Basic knowledge of tracking and fusion concepts.

Presenter: Thia Kirubarajan.

T. Kirubarajan (Kiruba) holds the title of Distinguished Engineering Professor and holds the Canada Research Chair in Information Fusion at McMaster University, Canada. He has published about 350 research articles, 11 book chapters, one standard textbook on target tracking and four edited volumes. In addition to conducting research, he has worked extensively with government departments and companies to process real data and to transition his research to the real world through his company TrackGen. As part of this, he has led the development of a number of software programs, including MultiTrack for real-time large-scale multisensor-multitarget tracking, MultiFuse for distributed tracking, and ISR360 for visualization, performance analysis and situation awareness, which have been integrated into some real systems.

Back to top menu


T11: Multitarget Tracking and Multisensor Information Fusion

Intended Audience:

Engineers and scientists.


Objectives: To provide participants with the latest state-of-the-art techniques for estimating the states of multiple targets with multisensor information fusion. In particular, low observable targets will be considered. Tools for algorithm selection, design and evaluation will be presented. These form the basis of automated decision systems for advanced surveillance and targeting. The various information processing configurations for fusion are described, including the recently solved track-to-track fusion from heterogeneous sensors.

Review of the Basic Techniques for Tracking. The Kalman, Alpha-Beta(-Gamma) and Extended Kalman filters: their capabilities and limitations.
Tracking Targets with Multiple Behavior Modes. The Interacting Multiple Model (IMM) estimation algorithm — a real-time implementable, self-adjusting, variable-bandwidth tracking filter.
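For readers unfamiliar with the fixed-gain trackers reviewed above, here is a minimal alpha-beta filter sketch; the gains and the noiseless scenario are illustrative assumptions, chosen only to show the predict/correct structure.

```python
def alpha_beta_track(zs, dt, alpha=0.5, beta=0.1, x0=0.0, v0=0.0):
    """Fixed-gain alpha-beta filter: predict with constant velocity,
    then correct position (alpha) and velocity (beta) from the residual."""
    x, v = x0, v0
    estimates = []
    for z in zs:
        x_pred = x + v * dt          # constant-velocity prediction
        r = z - x_pred               # measurement residual
        x = x_pred + alpha * r       # position correction
        v = v + (beta / dt) * r      # velocity correction
        estimates.append((x, v))
    return estimates

# Target moving at 2 m/s, noiseless 1 Hz position reports 2, 4, ..., 40
est = alpha_beta_track([2.0 * k for k in range(1, 21)], dt=1.0)
print(est[-1])  # (position, velocity) approaching (40, 2)
```

Unlike the Kalman filter, the gains here never adapt to measurement quality; that fixed bandwidth is exactly the limitation the Kalman and IMM estimators address.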

Multisensor Data Fusion. Information processing configurations in multisensor tracking. Type I: Single sensor or reporting responsibility. Type II: Single sensor tracking followed by track-to-track fusion. The dependence of local tracking errors even across independently operating sensors. Type III: Measurement-to-measurement association followed by central dynamic association and tracking. Type IV: Centralized tracking/fusion.

Information Matrix Fusion. A special centralized tracking/fusion configuration. Algorithms for synchronous and asynchronous sensors.
Heterogeneous Track-to-Track Fusion. T2TF from an active and a passive sensor. Why T2TF can be superior to centralized fusion.
Bias Estimation for Passive Sensors
Bias estimation for optical sensor measurements with targets of opportunity. The minimum number of sensors and targets needed.

Measurement-to-Measurement Fusion from Passive Sensors.
Statistical efficiency of composite position measurements from fusing LOS (line of sight angle) measurements. The only obtainable covariance — from the CRLB (Cramer-Rao Lower Bound) — is shown to be the actual covariance.
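As a rough illustration of forming a composite position from LOS measurements, the sketch below intersects 2-D bearing lines from two hypothetical passive sensors by linear least squares. It deliberately ignores measurement noise and the CRLB/efficiency analysis covered in the course; the geometry is an invented example.

```python
import numpy as np

def fuse_los(sensors, bearings):
    """Least-squares intersection of 2-D lines of sight.
    Bearing theta defines a line from sensor s along (cos t, sin t);
    the normal n = (-sin t, cos t) gives the linear constraint
    n . x = n . s, stacked and solved in least squares."""
    A, b = [], []
    for s, th in zip(sensors, bearings):
        n = np.array([-np.sin(th), np.cos(th)])
        A.append(n)
        b.append(n @ s)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x

# Two passive sensors observing a target at (10, 5)
target = np.array([10.0, 5.0])
s1, s2 = np.array([0.0, 0.0]), np.array([20.0, 0.0])
th1 = np.arctan2(target[1] - s1[1], target[0] - s1[0])
th2 = np.arctan2(target[1] - s2[1], target[0] - s2[0])
print(fuse_los([s1, s2], [th1, th2]))  # recovers the target position
```

With noisy bearings, the composite measurement carries a covariance determined by the sensor-target geometry; the course result cited above concerns when that covariance attains the CRLB.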

The course is based on the book: [1] Y. Bar-Shalom, P. K. Willett and X. Tian, Tracking and Data Fusion, YBS Publishing, 2011, and additional notes.

Background text: [2] Y. Bar-Shalom, X. R. Li and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Algorithms and Software for Information Extraction, Wiley, 2001.

Prerequisites: Engineers/scientists with prior knowledge of basic probability and state estimation (see, e.g., [2]). This is an intensive course in order to cover several important recent advances and applications.

Presenter: Yaakov Bar-Shalom.

Yaakov Bar-Shalom was born on May 11, 1941. He received the B.S. and M.S. degrees from the Technion, Israel Institute of Technology, in 1963 and 1967 and the Ph.D. degree from Princeton University in 1970, all in electrical engineering. Currently he is Board of Trustees Distinguished Professor in the Dept. of Electrical and Computer Engineering and Marianne E. Klewin Professor in Engineering at the University of Connecticut.
His current research interests are in estimation theory and target tracking; he has published over 400 papers and book chapters in these areas and in stochastic adaptive control, and has coauthored and edited 8 books.
He has consulted for numerous companies and government agencies, and originated the series of Multitarget-Multisensor Tracking short courses. He served as General Chairman of FUSION 2000, President of ISIF in 2000 and 2002, and Vice President for Publications in 2004-13. Since 1995 he has been a Distinguished Lecturer of the IEEE AESS and has given numerous keynote addresses at major national and international conferences.
He is co-recipient of the M. Barry Carlton Award for the best paper in the IEEE Transactions on Aerospace and Electronic Systems in 1995 and 2000. In 2002 he received the J. Mignona Data Fusion Award from the DoD JDL Data Fusion Group. He was awarded the 2008 IEEE Dennis J. Picard Medal for Radar Technologies and Applications and is listed in “top authors in engineering” by academic.research.microsoft as the #1 cited author in Aerospace Engineering.

Back to top menu




T12: Implementations of random-finite-set-based multi-target filters

Intended Audience:

Anyone who is interested in multi-target tracking.


The Finite Set Statistics framework for multi-sensor multi-target tracking has attracted considerable interest in recent years. It provides a unified perspective of multi-target tracking in a very intuitive manner by drawing direct parallels with the simpler problem of single-target tracking. This framework has led to the development of multi-target filters such as the Probability Hypothesis Density (PHD), Cardinalized PHD (CPHD), Multi-Bernoulli filters and, recently, the Generalized Labeled Multi-Bernoulli filter. In this tutorial, we show how these filters are implemented and illustrate via Matlab how these filters work. In particular, the tutorial will present the implementations of (1) Single target tracking in clutter, (2) Bernoulli filter, (3) Multi-Bernoulli filter, (4) PHD and CPHD filters, (5) Generalized Labeled Multi-Bernoulli filter.
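As a taste of the Bernoulli filter's structure, the sketch below implements only the scalar existence-probability recursion; the likelihood-to-clutter ratios, birth/survival/detection probabilities, and measurement sequence are invented numbers, and a full Bernoulli filter also propagates the target's spatial density alongside this recursion.

```python
def bernoulli_predict(r, p_s=0.99, p_b=0.01):
    """Existence prediction: a target may be born (p_b) if absent,
    or survive (p_s) if present."""
    return p_b * (1.0 - r) + p_s * r

def bernoulli_update(r_pred, lik_ratios, p_d=0.9):
    """Existence update. lik_ratios holds, for each received measurement z,
    the predicted target likelihood of z divided by the clutter intensity
    at z. No measurements (empty list) counts as a missed detection."""
    I = sum(lik_ratios)
    num = r_pred * (1.0 - p_d + p_d * I)
    den = 1.0 - r_pred * p_d + r_pred * p_d * I
    return num / den

r = 0.1                                # weak prior belief a target exists
for ratios in [[8.0], [6.5], [7.2]]:   # one target-like measurement per scan
    r = bernoulli_update(bernoulli_predict(r), ratios)
print(r)   # repeated target-like detections drive existence toward 1
```

The PHD, CPHD and (generalized labeled) multi-Bernoulli filters covered in the tutorial generalize this idea from one Bernoulli component to a whole multi-target state.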

Matlab code will be provided to the participants. It is envisaged that participants will come away with sufficient know-how to implement and apply these algorithms in their work.

Prerequisites: Working knowledge of random variables, probability density functions, the Gaussian distribution, and concepts such as state-space models. This is a sequel to Ron Mahler’s companion tutorial “Advances in Statistical Multisource-Multitarget Information Fusion”.

Presenters: Ba-Ngu Vo and Ba-Tuong Vo

Ba-Ngu Vo received his B.Sc. degree in Pure Mathematics and B.E. degree in Electrical Engineering with first class honors in 1994, and PhD in 1997. He held various research positions before joining the Department of Electrical and Electronic Engineering at the University of Melbourne in 2000. In 2010, he joined the School of Electrical Electronic and Computer Engineering at the University of Western Australia as Winthrop Professor and Chair of Signal Processing. Currently he is Professor and Chair of Signals and Systems in the Department of Electrical and Computer Engineering at Curtin University. Prof. Vo is a recipient of the Australian Research Council’s inaugural Future Fellowship. His research interests are Signal Processing, Systems Theory and Stochastic Geometry with emphasis on target tracking, robotics, computer vision and space situational awareness.

Ba-Tuong Vo received the B.Sc. degree in applied mathematics and B.E. degree in electrical and electronic engineering (with first-class honors) in 2004 and the Ph.D. degree in engineering (with Distinction) in 2008, all from the University of Western Australia. He is currently an Associate Professor in the Department of Electrical and Computer Engineering at Curtin University and a recipient of an Australian Research Council Fellowship. His primary research interests are in point process theory, filtering and estimation, and multiple object filtering.

Both presenters were recipients of the 2010 Australian Museum DSTO Eureka Prize for “Outstanding Science in Support of Defence or National Security”.

Back to top menu