Special Sessions

Acoustic Data Fusion

Marc Oispuu

Nowadays, acoustic sensors are used in a wide range of applications. The tasks performed by acoustic sensors range from detection and classification to direction finding, localization, and tracking of acoustic sound sources. Accordingly, methods of acoustic data fusion are used, for example, in the reconnaissance of drones, in smart home applications, in the tracing of missing persons, or in the localization of explosions or snipers. In all these areas, scientists are confronted with similar acoustic-specific opportunities and challenges. These include the broadband characteristics of most acoustic signals, which mean that the usable signal and interfering signals often overlap significantly in time and frequency. In addition, the low amplitudes typical of acoustic sound sources, in combination with atmospheric attenuation and frequency-dependent distance attenuation at high ambient noise levels, complicate acoustic reconnaissance and lead to low sensing ranges. The comparatively low propagation speed of sound is another peculiarity of acoustic signals that needs to be considered when performing acoustic data fusion. Thus, the field is challenging and full of opportunities for a wide range of applications.
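As a purely illustrative sketch of how the finite propagation speed of sound enters the processing chain (the microphone spacing, sampling rate, and signals below are all assumed for this example), the time difference of arrival between two microphones can be estimated by cross-correlation and converted to a bearing under a far-field assumption:

```python
import numpy as np

C_SOUND = 343.0      # approximate speed of sound in air [m/s]
FS = 16000           # sampling rate [Hz] (assumed)
MIC_DISTANCE = 0.5   # microphone spacing [m] (assumed geometry)

def tdoa_bearing(x1, x2):
    """Estimate a far-field bearing from two microphone signals via the TDOA."""
    corr = np.correlate(x1, x2, mode="full")
    lags = np.arange(-len(x2) + 1, len(x1))
    tau = lags[np.argmax(corr)] / FS       # TDOA [s]; positive if mic 1 receives later
    # Far-field model: tau = MIC_DISTANCE * cos(theta) / C_SOUND
    cos_theta = np.clip(tau * C_SOUND / MIC_DISTANCE, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Synthetic broadband burst arriving at microphone 2 ten samples later.
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
delay = 10
x1 = s
x2 = np.concatenate([np.zeros(delay), s[:-delay]])
print(f"estimated bearing: {tdoa_bearing(x1, x2):.1f} deg")
```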

This special session meets the growing interest of the fusion community in acoustic applications and addresses fundamental techniques, recent developments, and future research directions in the field of acoustic data fusion. The hope is to inspire the research and development of innovative approaches in the acoustic domain.

Advanced Nonlinear Filtering

Uwe Hanebeck

Methods for Bayesian inference with nonlinear systems are of fundamental interest to the information fusion community. Great efforts have been made to develop state estimation methods that approximate the true posterior ever more closely. Further objectives are to increase their efficiency, to relax their requirements and assumptions, and to allow their application in more general settings.

Areas such as target tracking, guidance, positioning, navigation, sensor fusion, and decision-making usually require linear or nonlinear state estimation methods and are therefore of broad interest to the information fusion community. These methods provide an estimate of the state of a dynamic system, which is in general not directly measurable, from a set of noisy measurements.
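As a minimal sketch of nonlinear Bayesian state estimation from noisy measurements, the following bootstrap particle filter is applied to a common scalar benchmark model; the model, noise levels, and particle count are assumptions made only for this illustration, not the method of any particular contribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, k):
    """Nonlinear state transition of a common scalar benchmark model."""
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

def h(x):
    """Nonlinear measurement function."""
    return x**2 / 20.0

Q, R, N, T = 10.0, 1.0, 1000, 50   # noise variances, particle count, time steps (assumed)

# Simulate a trajectory and noisy measurements.
x_true, y = np.zeros(T), np.zeros(T)
for k in range(1, T):
    x_true[k] = f(x_true[k - 1], k) + rng.normal(0, np.sqrt(Q))
    y[k] = h(x_true[k]) + rng.normal(0, np.sqrt(R))

# Bootstrap particle filter: predict with the dynamics, weight by the likelihood, resample.
particles = rng.normal(0, 1, N)
estimates = np.zeros(T)
for k in range(1, T):
    particles = f(particles, k) + rng.normal(0, np.sqrt(Q), N)  # prediction
    w = np.exp(-0.5 * (y[k] - h(particles))**2 / R) + 1e-300    # likelihood weights
    w /= w.sum()
    estimates[k] = np.dot(w, particles)                         # posterior mean estimate
    particles = rng.choice(particles, size=N, p=w)              # multinomial resampling

print("RMSE:", np.sqrt(np.mean((estimates - x_true)**2)))
```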

This special session focuses on recent advances in nonlinear state estimation (filters, smoothers, and predictors) for both discrete and continuous time system models.

Context-based Information Fusion

Jesus Garcia

The goal of the proposed session is to discuss approaches to context-based information fusion. It will cover the design and development of information fusion solutions that integrate sensor data with contextual knowledge.

The development of IF systems that include contextual factors and information offers an opportunity to improve the quality of the fused output, provide solutions adapted to the application requirements, and enhance tailored responses to user queries. Challenges of context-based strategies include selecting appropriate representations, exploitation mechanisms, and instantiations. Context can be represented as knowledge bases, ontologies, geographical maps, etc., and forms a powerful tool to improve adaptability and system performance. Example applications include context-aided tracking and classification, situational reasoning, and ontology building and updating.
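As a deliberately small illustration of exploiting contextual knowledge in tracking (here the context is a geographical map reduced to a single straight road segment; the coordinates are hypothetical), a raw position estimate can be constrained to the road:

```python
import numpy as np

def project_onto_road(position, road_start, road_end):
    """Constrain a position estimate to a straight road segment (contextual knowledge)."""
    d = road_end - road_start
    t = np.clip(np.dot(position - road_start, d) / np.dot(d, d), 0.0, 1.0)
    return road_start + t * d

# Hypothetical example: a raw track estimate lying slightly off a known road.
road_start = np.array([0.0, 0.0])
road_end = np.array([100.0, 50.0])
raw_estimate = np.array([40.0, 28.0])
print("context-aided estimate:", project_onto_road(raw_estimate, road_start, road_end))
```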

Therefore, the session covers both representation and exploitation mechanisms, so that contextual knowledge can be efficiently integrated into the fusion process and enable adaptation.

Directional Estimation

Florian Pfaff

Many estimation problems of practical relevance include the problem of estimating directional quantities, for example, angular values or orientations. However, conventional filters like the Kalman filter assume Gaussian distributions defined on ℝⁿ. This assumption neglects the inherent periodicity present in directional quantities. Consequently, more sophisticated approaches are required to accurately describe the circular setting.
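The effect of neglecting periodicity is easy to demonstrate; the sketch below (purely illustrative) compares the arithmetic mean of two bearings near the ±180° wrap-around with a circular mean computed from unit vectors, as used in directional statistics:

```python
import numpy as np

def circular_mean(angles_rad):
    """Mean direction of angular data, respecting the 2*pi periodicity."""
    return np.arctan2(np.mean(np.sin(angles_rad)), np.mean(np.cos(angles_rad)))

# Two bearings close to the +/-180 degree wrap-around.
angles = np.radians([179.0, -179.0])

print("arithmetic mean:", np.degrees(np.mean(angles)), "deg")        # 0 deg, i.e. 180 deg off
print("circular mean:  ", np.degrees(circular_mean(angles)), "deg")  # +/-180 deg, as expected
```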

This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of estimation involving directional and periodic data. It is our goal to bridge the gap between theoreticians and practitioners. Thus, we include both applied and theoretical contributions to this topic.

Evaluation of Technologies for Uncertainty Reasoning

Paulo Costa

The 2023 ETUR special session will focus on exploring the connections between applied AI, uncertainty representation and reasoning within the Information Fusion context. This includes related work on machine learning, explainability, hybrid systems, human-machine teaming, automated vehicles, and cognitive security, as well as other advanced knowledge representation and reasoning techniques.

This Special Session is intended to report the latest results of the ISIF’s ETURWG, which aims to bring together advances and developments in the area of evaluating uncertainty representation. The ETURWG special sessions started at Fusion 2010 and have been held ever since, consistently drawing between 30 and 50 attendees. While most attendees are ETURWG participants, new researchers and practitioners interested in uncertainty evaluation have attended the sessions, and some have stayed with the ETURWG.

Extended Object and Group Tracking

Kolja Thormann 

Traditional object tracking algorithms assume that the target object can be modeled as a single point without a spatial extent. However, there are many scenarios in which this assumption is not justified. For example, when the sensor resolution is fine compared to the spatial extent of the object, a varying number of measurements can be received, originating from points on the entire surface or contour or from spatially distributed reflection centers. Furthermore, a collectively moving group of point objects can be seen as a single extended object because of the interdependency of the group members.
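To make the measurement situation concrete, the following toy sketch (loosely in the spirit of random-matrix extent estimation, purely illustrative, with all numbers assumed and without the positive-definiteness safeguards needed in practice) treats the spread of a single scan of measurements as information about both the centroid and the elliptical extent of the object:

```python
import numpy as np

rng = np.random.default_rng(2)

# One scan of an extended object: several measurements spread over an elliptical
# extent around the (unknown) object center, plus additive sensor noise.
true_center = np.array([50.0, 20.0])
true_extent = np.array([[9.0, 2.0],
                        [2.0, 4.0]])   # shape matrix of the ellipse (assumed)
R = 0.5 * np.eye(2)                    # measurement noise covariance (assumed known)

n_meas = 12
sources = rng.multivariate_normal(true_center, true_extent, size=n_meas)
measurements = sources + rng.multivariate_normal(np.zeros(2), R, size=n_meas)

# Single-scan estimate: the sample mean estimates the centroid, and the measurement
# scatter (with the known noise covariance removed) estimates the extent.
centroid = measurements.mean(axis=0)
extent_estimate = np.cov(measurements.T, bias=True) - R

print("centroid estimate:", centroid)
print("extent estimate:\n", extent_estimate)
```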

This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of extended object and group tracking.

Information Fusion for Situation Understanding and Sense-Making

Lauro Snidaro

The exploitation of all relevant information originating from a growing mass of heterogeneous sources, both device-based (sensors, video, etc.) and human-generated (text, voice, etc.), is a key factor for producing a timely, comprehensive, and accurate description of a situation or phenomenon in order to make informed decisions. Even when exploiting multiple sources, most fusion systems are developed to combine just one type of data (e.g., positional data) in order to achieve a certain goal (e.g., accurate target tracking), without considering other relevant information (e.g., the current situation status) from other abstraction levels. The capability of seamlessly combining information from diverse sources, including HUMINT, OSINT, and so on, exists only in a few narrowly specialized and limited areas. In other words, there is no unified, holistic solution to this problem.

Processes at different levels generally work on data and information of a different nature. For example, low-level processes could deal with device-generated data (e.g., images, tracks, etc.), while high-level processes might exploit human-generated knowledge (e.g., text, ontologies, etc.). The overall objective is to enhance the making sense of information collected from multiple heterogeneous sources and processes, with the goal of improved situational awareness, including topics such as sense-making of patterns of behavior, global interactions and information quality, and the integration of sources of data, information, and contextual knowledge.

The proposed special session will bring together researchers working on fusion techniques and algorithms often considered to be different and disjoint. The objective is thus to foster the discussion of and proposals for viable solutions to address challenging problems in relevant applications.

Sensor Models and Calibration Techniques

Jannik Springer

Modern fusion algorithms process vast amounts of data from numerous different active and passive sensor systems. The sensor model that links the physical phenomenon to the sensor's output signal is of paramount importance. Often, fusion algorithms must compensate for sensor errors caused by overly simple models or improperly calibrated sensors. There is an inherent trade-off between the performance and complexity of a model. While simple models are easy to calibrate, i.e., all unknown model parameters can be easily determined, they often cannot fully capture the actual (complex) sensor response. On the other hand, the discrepancy between the actual and modeled response of a sensor can be reduced with an increasingly complex sensor model, but at the expense of costly calibration procedures. The upfront cost of calibrating a sensor can be enormous, so self-calibration procedures that reduce model deviations during sensor operation are highly desirable. The eminent importance of appropriate sensor models and corresponding (self-)calibration procedures therefore deserves the attention of the international information fusion community.
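As a deliberately simple illustration of the calibration problem (an affine gain/bias sensor model fitted by least squares to known reference stimuli; the model, names, and numbers are hypothetical and stand in for the far more complex sensor models discussed in the session):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical affine sensor model: output = gain * input + bias + noise.
true_gain, true_bias = 1.05, -0.3
reference_inputs = np.linspace(0.0, 10.0, 20)   # known reference stimuli
outputs = true_gain * reference_inputs + true_bias + rng.normal(0, 0.05, 20)

# Calibration as a linear least-squares fit of the two model parameters.
A = np.column_stack([reference_inputs, np.ones_like(reference_inputs)])
(gain_est, bias_est), *_ = np.linalg.lstsq(A, outputs, rcond=None)

def calibrated(raw_output):
    """Invert the fitted sensor model to recover the physical quantity."""
    return (raw_output - bias_est) / gain_est

print(f"estimated gain {gain_est:.3f}, bias {bias_est:.3f}")
print("calibrated reading for a raw output of 5.0:", calibrated(5.0))
```

A self-calibration procedure would, under the same assumed model, refine these parameters during sensor operation whenever reference information becomes available, instead of relying on a costly upfront calibration campaign.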