Multisensor Fusion


The Experts below are selected from a list of 3309 Experts worldwide, ranked by the ideXlab platform

Lucy Y. Pao - One of the best experts on this subject based on the ideXlab platform.

  • Sensor Management and Multisensor Fusion Algorithms for Tracking Applications
    2003
    Co-Authors: Lucy Y. Pao
    Abstract:

    The objective of the research under this Office of Naval Research award is to develop Multisensor management and Fusion algorithms for tracking applications. Under this award the author has achieved a number of results: (1) developed a decorrelated sequence method for distributed Fusion that is amenable to general distributed architectures; (2) compared a number of recently proposed Multisensor, multitarget tracking algorithms to better understand which algorithms perform better in certain scenarios; (3) developed variance estimation and ranking tools for efficiently comparing Multisensor Fusion algorithms; and (4) developed several schemes for controlling sensor information to achieve covariance goals when tracking interacting targets in cluttered environments. The results have provided insight as to the relative performance of various Multisensor Fusion methods, and the results also have provided a basis for assessing the tradeoffs between performance and computational and communications requirements when planning new sensor network architectures or communication link protocols. The report also lists 13 research papers that have been fully or partially funded under this contract and have been accepted for publication.

  • Distributed Multisensor Fusion Algorithms for Tracking Applications
    2000
    Co-Authors: Lucy Y. Pao
    Abstract:

    The objective of the research under this ONR award is to develop Multisensor Fusion algorithms and sensor management techniques for tracking applications. Under this award, we have achieved a number of results: (1) We have developed a method of distributed Fusion that is amenable to general distributed architectures; (2) We have developed non-simulation techniques for comparing Multisensor Fusion algorithms that are significantly more computationally efficient than performing Monte Carlo simulation evaluations; (3) We have investigated and compared the computational complexity and tracking performance of sequential and parallel implementations of Multisensor Fusion algorithms; (4) We have investigated the order of processing sensors of unequal qualities in sequential implementations of Multisensor Fusion algorithms; (5) We have developed several schemes for controlling sensor information and have evaluated the effects of sensor request delays; and (6) We have investigated the application of ordinal optimization and super-heuristic techniques for developing efficient implementations of our new sensor management methods. Our results have provided insight as to the relative performance of various Multisensor Fusion methods, and the results have also provided a basis for assessing the tradeoffs between performance and computational and communication requirements when planning new sensor network architectures or communication link protocols.
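The distributed-fusion method itself is not detailed in this summary; as a generic, hypothetical sketch of the basic fusion step that such architectures build on, independent local track estimates can be combined in information (inverse-covariance) form. Scalar state for brevity; all names and values are illustrative, not the authors' decorrelated-sequence method:

```python
def fuse_independent(estimates):
    """Fuse independent (x, P) track estimates in information form.

    Each local tracker reports a state estimate x with variance P.
    Under independence, information (1/P) and information states (x/P)
    simply add; correlated tracks would first need decorrelation,
    which is the harder problem distributed-fusion methods address.
    """
    info = sum(1.0 / P for _, P in estimates)        # total information
    info_state = sum(x / P for x, P in estimates)    # information-weighted states
    P_fused = 1.0 / info
    x_fused = P_fused * info_state
    return x_fused, P_fused

# Two sensors tracking the same target position:
x, P = fuse_independent([(10.2, 4.0), (9.8, 1.0)])
# The fused variance is smaller than either local variance,
# and the fused state leans toward the more confident sensor.
```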

  • Distributed Multisensor Fusion Algorithms for Tracking Applications (Year 1)
    1998
    Co-Authors: Lucy Y. Pao
    Abstract:

    The objective of the research under this ONR award is to develop distributed Multisensor data Fusion algorithms for tracking applications, as well as non-simulation and analytical methods of performance evaluation. Since the beginning of this project in June 1997, we have achieved results in several different areas: (1) We have developed a method of distributed Fusion that is amenable to general distributed architectures; (2) We have developed two non-simulation techniques for comparing Multisensor probabilistic data association filters that are significantly more computationally efficient than performing Monte Carlo simulation evaluations; (3) We have investigated and compared the computational complexity and tracking performance of sequential and parallel implementations of the Multisensor probabilistic data association algorithm; and (4) We have developed several schemes for controlling sensor information and have evaluated the effects of delays. Our results will provide insight as to the relative performance of various Multisensor Fusion methods, and the results will also provide a basis for assessing the tradeoffs between performance and computational and communication requirements when planning new sensor network architectures or communication link protocols.

  • Centralized Multisensor Fusion algorithms for tracking applications
    Control Engineering Practice, 1994
    Co-Authors: Lucy Y. Pao
    Abstract:

    Two single-sensor tracking algorithms, Joint Probabilistic Data Association (JPDA) and Mixture Reduction (MR), are extended for use in Multisensor multitarget tracking situations, under the assumption that the sensor measurement errors are independent across sensors. In the formulations for both Multisensor algorithms, the equations for the calculation of the data association probabilities have been put in the same form as for the JPDA, thus allowing previously developed fast JPDA computational techniques to be applicable. The computational complexity of these Multisensor algorithms is discussed, and simulation results are presented demonstrating and comparing the performances of these and other Multisensor Fusion algorithms.

  • Multisensor Fusion Algorithms for Tracking
    1993 American Control Conference, 1993
    Co-Authors: Sean D. O'neil, Lucy Y. Pao
    Abstract:

    In this paper we extend a multitarget tracking algorithm for use in Multisensor tracking situations. The algorithm we consider is Joint Probabilistic Data Association (JPDA). JPDA is extended to handle an arbitrary number of sensors under the assumption that the sensor measurement errors are independent across sensors. We also show how filtering can be handled in Multisensor JPDA (MSJPDA) without leading to an exponential increase in filtering complexity. Simulation results are presented comparing the performance of the MSJPDA with another Multisensor Fusion algorithm and with the single-sensor JPDA algorithm.
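The sequential-processing idea above can be illustrated, stripped of the data-association machinery, with a scalar Kalman measurement update applied once per sensor: under independent measurement errors, absorbing sensor readings one at a time yields the same posterior as a batch update, regardless of order. This is a hedged sketch of that generic fact, not the MSJPDA algorithm itself:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update for one sensor reading z
    with measurement variance R (measurement model: z = x + noise)."""
    K = P / (P + R)                       # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

def sequential_fusion(x, P, readings):
    """Process each sensor's reading in turn; with independent
    measurement errors, the processing order does not change the
    final estimate, so complexity stays linear in sensor count."""
    for z, R in readings:
        x, P = kalman_update(x, P, z, R)
    return x, P

# Same prior, same two sensor readings, opposite processing orders:
x1, P1 = sequential_fusion(0.0, 10.0, [(1.0, 2.0), (1.4, 1.0)])
x2, P2 = sequential_fusion(0.0, 10.0, [(1.4, 1.0), (1.0, 2.0)])
# Both orders give the same posterior (up to rounding).
```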

R C Luo - One of the best experts on this subject based on the ideXlab platform.

  • Multisensor Fusion and Integration: A Review on Approaches and Its Applications in Mechatronics
    IEEE Transactions on Industrial Informatics, 2012
    Co-Authors: R C Luo, Chihchia Chang
    Abstract:

    The objective of this paper is to review the theories and approaches of Multisensor Fusion and integration (MFI) and its applications in mechatronics. MFI helps a system perceive changes in its environment and monitor its own state. Since each individual sensor has inherent defects and limitations, MFI synergistically merges the redundant information acquired by multiple sensors to provide a more accurate perception and support optimal decisions. The wide application spectrum of MFI in mechatronics includes industrial automation, the development of intelligent robots, military applications, and biomedical applications. In this paper, the architecture and algorithms of MFI are reviewed, and some implementation examples in industrial automation and robotic applications are presented. Furthermore, sensor Fusion methods at different levels, namely estimation methods, classification methods, and inference methods, which are the algorithms most frequently used in previous research, are summarized along with their advantages and limitations. Applications of MFI in robotics and mechatronics are discussed, and future perspectives of MFI deployment are included in the concluding remarks.

  • ISIE - Autonomous mobile robot localization based on Multisensor Fusion approach
    2012 IEEE International Symposium on Industrial Electronics, 2012
    Co-Authors: R C Luo, Wei-lung Hsu
    Abstract:

    The objective of this paper is to develop a Multisensor Fusion approach based on a particle filter for autonomous mobile robot localization. The sensors used here are an ultrasonic sensor and received signal strength (RSS), a byproduct of a ZigBee communication device. Both sensors are inexpensive compared with other range sensors.
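As a rough sketch of the kind of particle-filter fusion described above, each particle's weight can be the product of the two sensors' likelihoods, which is how independent measurements are fused in this framework. The 1-D state, Gaussian noise models, and all constants below are assumptions for illustration, not the paper's actual sensor models:

```python
import math
import random

random.seed(0)

def gaussian_pdf(x, mu, sigma):
    """Density of a Gaussian at x (used as a sensor likelihood)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pf_step(particles, motion, z_ultra, z_rss, sigma_u=0.3, sigma_r=1.0):
    """One predict/update/resample cycle of a 1-D particle filter.

    Each particle moves by `motion` plus process noise; its weight is
    the product of the ultrasonic and RSS-derived likelihoods, fusing
    both sensors in a single update."""
    # Predict: propagate particles with process noise.
    particles = [p + motion + random.gauss(0, 0.1) for p in particles]
    # Update: weight by the product of both sensors' likelihoods.
    weights = [gaussian_pdf(z_ultra, p, sigma_u) * gaussian_pdf(z_rss, p, sigma_r)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample to concentrate particles on likely states.
    return random.choices(particles, weights=weights, k=len(particles))

# Stationary robot near position 5; the two sensors roughly agree.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    particles = pf_step(particles, motion=0.0, z_ultra=5.0, z_rss=5.2)
estimate = sum(particles) / len(particles)   # posterior mean near 5
```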

  • Enriched Indoor Map Construction Based on Multisensor Fusion Approach for Intelligent Service Robot
    IEEE Transactions on Industrial Electronics, 2012
    Co-Authors: R C Luo, Chun C. Lai
    Abstract:

    The objective of this paper is to have an intelligent service robot that not only autonomously estimates the environment structure but also simultaneously detects the commonly recognized symbols/signs in the building. The result is an information-enriched map constructed from the environment geometry given by a laser ranger and the indoor indicators extracted from visual images. To implement this enriched map, Multisensor Fusion techniques, i.e., covariance intersection and covariance union, are utilized for robust pose association and sign estimation. Furthermore, an improved alignment technique is applied to improve mapping precision within a single simultaneous localization and mapping (SLAM) process. Additionally, a 2.5-D enriched environment map has been rapidly constructed with the Mesa SwissRanger. The proof of concept has been successfully demonstrated experimentally and is summarized in the conclusion.
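Covariance intersection, one of the two fusion techniques named above, can be sketched in its scalar form: the fused information is a convex combination of the inputs' information, which stays consistent even when the cross-correlation between the estimates is unknown. This is a generic illustration with a simple grid search for the weight, not the paper's implementation:

```python
def covariance_intersection(x1, P1, x2, P2, steps=1000):
    """Scalar covariance intersection of two estimates (x1, P1), (x2, P2)
    whose cross-correlation is unknown.

    Fused information is a convex combination of the inputs' information;
    the weight w is chosen here by grid search to minimize the fused
    variance, a common criterion."""
    best = None
    for i in range(steps + 1):
        w = i / steps
        info = w / P1 + (1.0 - w) / P2
        if info == 0.0:
            continue
        P = 1.0 / info
        x = P * (w * x1 / P1 + (1.0 - w) * x2 / P2)
        if best is None or P < best[1]:
            best = (x, P)
    return best

x, P = covariance_intersection(2.0, 1.0, 3.0, 4.0)
# In the scalar case the optimum puts all weight on the tighter
# estimate, so CI never claims more information than it actually has.
```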

  • Multisensor Fusion and Integration: Theories, Applications, and its Perspectives
    IEEE Sensors Journal, 2011
    Co-Authors: R C Luo, Chihchia Chang, Chun C. Lai
    Abstract:

    The decision-making processes in an autonomous mechatronic system rely on data coming from multiple sensors. Optimal Fusion of information from distributed multiple sensors requires robust Fusion approaches. The science of Multisensor Fusion and integration (MFI) has been formed to address these information-merging requirements. MFI aims to give the system a more accurate perception, enabling optimal decisions to be made. The wide application spectrum of MFI in mechatronic systems includes industrial automation, the development of intelligent robots, military applications, biomedical applications, and microelectromechanical systems (MEMS)/nanoelectromechanical systems (NEMS). This paper reviews the theories and approaches of MFI together with its applications. Furthermore, sensor Fusion methods at different levels, namely estimation methods, classification methods, and inference methods, are reviewed as the most frequently used algorithms. Future perspectives of MFI deployment are included in the concluding remarks.

  • Multisensor Fusion and Integration Aspects of Mechatronics
    IEEE Industrial Electronics Magazine, 2010
    Co-Authors: R C Luo, Chihchia Chang
    Abstract:

    This article presents the application of Multisensor Fusion and integration from a mechatronics perspective. A review of algorithms for Multisensor Fusion is given, and an overview of mechatronics is also provided. Because of advancements in mechanical engineering, electrical engineering, and computing technology, increasingly compact and highly integrated mechatronic systems are being developed. By incorporating Multisensor Fusion technology, the performance of mechatronic systems and the applications derived from them can be enhanced and extended.

Chihchia Chang - One of the best experts on this subject based on the ideXlab platform.

  • Multisensor Fusion and Integration: A Review on Approaches and Its Applications in Mechatronics
    IEEE Transactions on Industrial Informatics, 2012
    Co-Authors: R C Luo, Chihchia Chang
    Abstract:

    The objective of this paper is to review the theories and approaches of Multisensor Fusion and integration (MFI) and its applications in mechatronics. MFI helps a system perceive changes in its environment and monitor its own state. Since each individual sensor has inherent defects and limitations, MFI synergistically merges the redundant information acquired by multiple sensors to provide a more accurate perception and support optimal decisions. The wide application spectrum of MFI in mechatronics includes industrial automation, the development of intelligent robots, military applications, and biomedical applications. In this paper, the architecture and algorithms of MFI are reviewed, and some implementation examples in industrial automation and robotic applications are presented. Furthermore, sensor Fusion methods at different levels, namely estimation methods, classification methods, and inference methods, which are the algorithms most frequently used in previous research, are summarized along with their advantages and limitations. Applications of MFI in robotics and mechatronics are discussed, and future perspectives of MFI deployment are included in the concluding remarks.

  • Multisensor Fusion and Integration: Theories, Applications, and its Perspectives
    IEEE Sensors Journal, 2011
    Co-Authors: R C Luo, Chihchia Chang, Chun C. Lai
    Abstract:

    The decision-making processes in an autonomous mechatronic system rely on data coming from multiple sensors. Optimal Fusion of information from distributed multiple sensors requires robust Fusion approaches. The science of Multisensor Fusion and integration (MFI) has been formed to address these information-merging requirements. MFI aims to give the system a more accurate perception, enabling optimal decisions to be made. The wide application spectrum of MFI in mechatronic systems includes industrial automation, the development of intelligent robots, military applications, biomedical applications, and microelectromechanical systems (MEMS)/nanoelectromechanical systems (NEMS). This paper reviews the theories and approaches of MFI together with its applications. Furthermore, sensor Fusion methods at different levels, namely estimation methods, classification methods, and inference methods, are reviewed as the most frequently used algorithms. Future perspectives of MFI deployment are included in the concluding remarks.

  • Multisensor Fusion and Integration Aspects of Mechatronics
    IEEE Industrial Electronics Magazine, 2010
    Co-Authors: R C Luo, Chihchia Chang
    Abstract:

    This article presents the application of Multisensor Fusion and integration from a mechatronics perspective. A review of algorithms for Multisensor Fusion is given, and an overview of mechatronics is also provided. Because of advancements in mechanical engineering, electrical engineering, and computing technology, increasingly compact and highly integrated mechatronic systems are being developed. By incorporating Multisensor Fusion technology, the performance of mechatronic systems and the applications derived from them can be enhanced and extended.

  • An overview of the issues and perspectives for Multisensor Fusion and integration in Mechatronics
    2010 IEEE International Symposium on Industrial Electronics, 2010
    Co-Authors: R C Luo, Chihchia Chang
    Abstract:

    Mechatronics is the integration of mechanical, electrical, information, communication, and nano/micro-level technologies in products and manufacturing processes. An intelligent mechatronic system needs the ability to perceive information from the ambient environment and to diagnose its own functions through multimodal sensors. Implementing Multisensor Fusion and integration (MFI) in mechatronic systems can help reduce the uncertainty of sensory data, resulting in more accurate perception. In this paper, an overview of the components, functions, and applications of mechatronics is presented. The architecture and Fusion algorithms of MFI at different Fusion levels in a mechatronic system are also reviewed, and issues and perspectives for Multisensor Fusion and integration in mechatronics are discussed.

Ahmed Hussein - One of the best experts on this subject based on the ideXlab platform.

  • Multisensor Fusion Localization using Extended H∞ Filter using Pre-filtered Sensors Measurements
    2019 IEEE Intelligent Vehicles Symposium (IV), 2019
    Co-Authors: Mostafa Osman, Ricardo Alonso, Ahmed Hammam, Francisco Miguel Moreno, Abdulla Al-kaff, Ahmed Hussein
    Abstract:

    The localization system is one of the main components of an intelligent vehicle, and it is critical from both a safety and a performance standpoint, requiring high accuracy and precision. However, the performance of such systems is affected by sensor limitations and, more importantly, noise. The localization problem can be solved by fusing information from multiple sensors, and the Multisensor Fusion problem is typically solved using probabilistic methods, among which Kalman filters are the most widely used. However, Kalman filters assume the noise models of the fused sensors are Gaussian and require the noise statistics to be known beforehand, leading to sub-optimal performance in systems with unknown noise characteristics. Accordingly, this paper presents a robust localization system based on the extended $\mathcal{H}_{\infty}$ filter to solve the localization problem in intelligent self-driving vehicles by fusing different localization sources. The proposed approach is validated through several real-world experiments under different scenarios, and its performance is compared with the Extended Kalman filter under the same conditions. The obtained results demonstrate that the proposed approach outperforms the Extended Kalman filter and validate the use of the extended $\mathcal{H}_{\infty}$ filter as a Multisensor Fusion approach for localization systems.
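The paper's extended $\mathcal{H}_{\infty}$ filter and pre-filtering stage are not reproduced here; as a hedged sketch of the underlying idea, one common scalar discrete-time H-infinity recursion (random-walk state, direct measurement, as found in standard state-estimation texts) adds a worst-case term controlled by theta and reduces to the Kalman filter when theta is 0:

```python
def hinf_step(x, P, z, R, Q, theta):
    """One step of a scalar discrete-time H-infinity filter
    (random-walk state, direct measurement z = x + noise).

    theta = 1/gamma^2 bounds the worst-case energy gain from
    disturbances to estimation error; theta = 0 recovers the
    Kalman filter, so no Gaussian noise assumption is essential."""
    denom = 1.0 - theta * P + P / R
    assert denom > 0.0, "existence condition violated; reduce theta"
    K = (P / denom) / R                 # filter gain
    x_new = x + K * (z - x)             # state update
    P_new = P / denom + Q               # variance-like update
    return x_new, P_new

# theta = 0 reproduces the Kalman recursion exactly:
xk, Pk = hinf_step(0.0, 2.0, 1.0, R=1.0, Q=0.5, theta=0.0)
# A nonzero theta inflates the variance-like term, i.e. the filter
# hedges against worst-case (non-Gaussian, unknown) disturbances:
xr, Pr = hinf_step(0.0, 2.0, 1.0, R=1.0, Q=0.5, theta=0.2)
```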

Alfred O. Hero - One of the best experts on this subject based on the ideXlab platform.

  • Simultaneous detection of lane and pavement boundaries using model-based Multisensor Fusion
    IEEE Transactions on Intelligent Transportation Systems, 2000
    Co-Authors: Sridhar Lakshmanan, Alfred O. Hero
    Abstract:

    This paper treats a problem arising in the design of intelligent vehicles: automated detection of lane and pavement boundaries using forward-looking optical and radar imaging sensors mounted on an automobile. In previous work, lane and pavement boundaries have always been located separately. This separate detection strategy is problematic when either the optical or the radar image is too noisy. We propose a Bayesian Multisensor image Fusion method to solve this boundary detection problem. The method uses a deformable template model to globally describe the boundaries of interest. The optical and radar imaging processes are described with random field likelihoods. The Multisensor Fusion boundary detection problem is reformulated as a joint MAP estimation problem. However, the joint MAP estimate is intractable, as it involves computing a notoriously difficult normalization constant, also known as the partition function. Therefore, we settle for the so-called empirical MAP estimate as an approximation to the true MAP estimate. Several experimental results are provided to demonstrate the efficacy of the empirical MAP estimation method in simultaneously detecting lane and pavement boundaries. Fusion of multi-modal images is of interest not only to the intelligent vehicles community but also to other fields, such as biomedicine, remote sensing, and target recognition. The method presented in this paper is also applicable to image Fusion problems in those areas.

  • ICIP (2) - Road and lane edge detection with Multisensor Fusion methods
    Proceedings 1999 International Conference on Image Processing (Cat. 99CH36348), 1999
    Co-Authors: S. Lakahmanan, Alfred O. Hero
    Abstract:

    This paper treats automated detection of road and lane boundaries by fusing information from forward-looking optical and active W-band radar imaging sensors mounted on a motor vehicle. A deformable template model is used to globally describe the boundary shapes. The optical and radar imaging processes are characterized with random field likelihoods. The Multisensor Fusion edge detection problem is posed in a Bayesian framework and a joint MAP estimate is employed to locate the road and lane boundaries. Three optimization approaches (multi-resolution pseudo-exhaustive search, Metropolis algorithm, and Metropolis algorithm with pre-tuned curvature) are proposed to implement the joint MAP estimate. Experimental results are shown to demonstrate that the joint MAP algorithm operates robustly and efficiently in a variety of road scenarios.
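Of the three optimization strategies listed, exhaustive search is the simplest to sketch. The toy below fuses two sensors' edge evidence for a single 1-D boundary offset by maximizing prior times likelihoods over a grid; the Gaussian models and every parameter are illustrative assumptions, not the paper's imaging likelihoods or template model:

```python
import math

def joint_map_offset(edges_optical, edges_radar, sigma_o=1.0, sigma_r=2.0,
                     prior_mu=0.0, prior_sigma=5.0, lo=-10.0, hi=10.0, step=0.01):
    """Toy 1-D analogue of joint MAP boundary estimation.

    Finds the boundary offset theta maximizing
    prior(theta) * optical likelihood * radar likelihood
    by exhaustive grid search; both sensors' evidence enters one
    joint objective, which is the essence of the fusion approach."""
    def log_gauss(x, mu, sigma):
        # Log Gaussian density up to an additive constant.
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

    best_theta, best_score = None, -math.inf
    theta = lo
    while theta <= hi:
        score = log_gauss(theta, prior_mu, prior_sigma)
        score += sum(log_gauss(e, theta, sigma_o) for e in edges_optical)
        score += sum(log_gauss(e, theta, sigma_r) for e in edges_radar)
        if score > best_score:
            best_theta, best_score = theta, score
        theta += step
    return best_theta

# Noisy edge positions from each sensor; the MAP estimate lands
# between them, weighted by each sensor's assumed precision.
theta_hat = joint_map_offset([2.0, 2.2], [2.6])
```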

  • ICIP (1) - A robust Bayesian Multisensor Fusion algorithm for joint lane and pavement boundary detection
    Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), 2001
    Co-Authors: Sridhar Lakshmanan, Alfred O. Hero
    Abstract:

    In this paper we propose to simultaneously detect lane and pavement boundaries by fusing information from both optical and radar images. The boundaries are described with concentric circular models, whose parameters are compatible and result in better-conditioned estimation problems than previous parabolic models. The optical and radar imaging processes are represented with Gaussian and log-normal probability densities, with which we avoid the ad hoc weighting scheme previously imposed on the two likelihood functions. The Multisensor Fusion boundary detection problem is posed in a Bayesian framework, and a joint maximum a posteriori (MAP) estimate is employed to locate the lane and pavement boundaries. Experimental results have shown that the Fusion algorithm outperforms single-sensor boundary detection algorithms in a variety of road scenarios. It also yields better boundary detection results than a Fusion algorithm based on the existing prior and likelihood formulations.
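The point that proper probability densities remove the need for an ad hoc weighting between sensors can be seen in the log domain: genuine log densities from different families simply add. A minimal sketch with an assumed Gaussian (optical) channel and log-normal (radar) channel; all parameters are illustrative, not the paper's models:

```python
import math

def gaussian_loglik(z, mu, sigma):
    """Log density of a Gaussian observation (optical channel here)."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - 0.5 * ((z - mu) / sigma) ** 2)

def lognormal_loglik(z, mu, sigma):
    """Log density of a log-normal observation (radar channel here);
    defined for z > 0."""
    return (-math.log(z * sigma * math.sqrt(2 * math.pi))
            - 0.5 * ((math.log(z) - mu) / sigma) ** 2)

def joint_log_posterior(theta, z_opt, z_rad, log_prior):
    """Because both channels contribute genuine log densities,
    the joint log posterior is their plain sum plus the log prior;
    no tuning weight between the sensors is needed."""
    return (log_prior(theta)
            + gaussian_loglik(z_opt, mu=theta, sigma=1.0)
            + lognormal_loglik(z_rad, mu=math.log(max(theta, 1e-9)), sigma=0.5))

flat_prior = lambda t: 0.0
# A boundary hypothesis matching both observations scores higher
# than one far from them:
val = joint_log_posterior(2.0, 2.0, 2.0, flat_prior)
far = joint_log_posterior(5.0, 2.0, 2.0, flat_prior)
```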