Scene Interpretation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 17154 Experts worldwide ranked by ideXlab platform

Zeev Smilansky - One of the best experts on this subject based on the ideXlab platform.

  • A 33 μW 64 × 64 Pixel Vision Sensor Embedding Robust Dynamic Background Subtraction for Event Detection and Scene Interpretation
    IEEE Journal of Solid-State Circuits, 2013
    Co-Authors: Nicola Cottini, M Gottardi, Nicola Massari, Roberto Passerone, Zeev Smilansky
    Abstract:

    A 64 × 64-pixel ultra-low-power vision sensor is presented, performing pixel-level dynamic background subtraction as the low-level processing layer of an algorithm for Scene Interpretation. The pixel embeds two digitally programmable Switched-Capacitor Low-Pass Filters (SC-LPFs) and two clocked comparators, aimed at detecting any anomalous behavior of the current photo-generated signal with respect to its past history. The 45 T, 26 μm square pixel has a fill factor of 12%. The vision sensor has been fabricated in a 0.35 μm 2P3M CMOS process, is powered at 3.3 V, and consumes 33 μW at 13 fps, which corresponds to 620 pW/(frame·pixel).
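
The per-pixel scheme described in the abstract can be sketched in software. The following is a minimal illustrative model, not the chip's actual circuit: a single first-order low-pass filter stands in for the SC-LPF history, a symmetric threshold stands in for the two comparators, and the coefficient and threshold values are invented.

```python
# Software sketch (hypothetical) of pixel-level dynamic background
# subtraction: a first-order low-pass filter tracks each pixel's past
# history, and a comparator pair flags samples that deviate from that
# history by more than a threshold in either direction.

def make_pixel(alpha=0.05, threshold=0.1):
    """Return a stateful per-pixel event detector.

    alpha     -- low-pass filter coefficient (illustrative value)
    threshold -- comparator window around the filtered background
    """
    state = {"background": None}

    def update(sample):
        bg = state["background"]
        if bg is None:                        # first frame initializes history
            state["background"] = sample
            return False
        event = abs(sample - bg) > threshold  # two comparators: above / below
        state["background"] = bg + alpha * (sample - bg)  # update history
        return event

    return update

pixel = make_pixel()
# A static scene produces no events; an abrupt change does.
events = [pixel(v) for v in [0.5, 0.5, 0.5, 0.9, 0.5]]
```

Because the background is updated slowly (small `alpha`), a brief change triggers an event while gradual illumination drift is absorbed into the background, which is the point of subtracting a *dynamic* background.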

Nicola Cottini - One of the best experts on this subject based on the ideXlab platform.

  • A 33 μW 64 × 64 Pixel Vision Sensor Embedding Robust Dynamic Background Subtraction for Event Detection and Scene Interpretation
    IEEE Journal of Solid-State Circuits, 2013
    Co-Authors: Nicola Cottini, M Gottardi, Nicola Massari, Roberto Passerone, Zeev Smilansky
    Abstract:

    A 64 × 64-pixel ultra-low-power vision sensor is presented, performing pixel-level dynamic background subtraction as the low-level processing layer of an algorithm for Scene Interpretation. The pixel embeds two digitally programmable Switched-Capacitor Low-Pass Filters (SC-LPFs) and two clocked comparators, aimed at detecting any anomalous behavior of the current photo-generated signal with respect to its past history. The 45 T, 26 μm square pixel has a fill factor of 12%. The vision sensor has been fabricated in a 0.35 μm 2P3M CMOS process, is powered at 3.3 V, and consumes 33 μW at 13 fps, which corresponds to 620 pW/(frame·pixel).

Bernd Neumann - One of the best experts on this subject based on the ideXlab platform.

  • A Robot Waiter that Predicts Events by High-Level Scene Interpretation
    International Conference on Agents and Artificial Intelligence, 2014
    Co-Authors: Jos Lehmann, Bernd Neumann, Wilfried Bohlken, Lothar Hotz
    Abstract:

    Being able to predict events and occurrences which may arise from the current situation is a desirable capability of an intelligent agent. In this paper, we show that a high-level Scene Interpretation system, implemented as part of a comprehensive robotic system developed in the XXX project, can also be used for prediction. This way, the robot can foresee possible developments in its environment and the effect they may have on its activities. As a guiding example, we consider a robot acting as a waiter in a restaurant and the task of predicting possible occurrences and courses of action, e.g., when serving coffee to a guest. Our approach requires that the robot possess conceptual knowledge about occurrences in the restaurant and its own activities, represented in the standardized ontology language OWL and augmented by constraints expressed in SWRL. Conceptual knowledge may be acquired by conceptualizing experiences collected in the robot’s memory. Predictions are generated by a model-construction process which seeks to explain evidence as parts of such conceptual knowledge, thereby generating possible future developments. The experimental results show, among other things, the prediction of possible obstacle situations and their effect on robot actions and estimated execution times.
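
The prediction-by-model-construction idea can be illustrated with a toy sketch. Conceptual knowledge is reduced here to an ordered occurrence model with estimated step durations; observed evidence is matched against a prefix of the model, and the unmatched remainder is the prediction. The `SERVE_COFFEE` model and its durations are invented for illustration and are not taken from the paper (which uses OWL and SWRL, not Python dictionaries).

```python
# Hypothetical sketch of prediction by matching evidence to a prefix of
# a conceptual occurrence model; the remaining steps and their summed
# durations are the predicted future development.

SERVE_COFFEE = ["take_order", "fetch_coffee", "approach_table", "place_cup"]
DURATIONS = {"take_order": 10, "fetch_coffee": 20,
             "approach_table": 15, "place_cup": 5}   # seconds, invented

def predict(evidence, model, durations):
    """If the observed evidence matches a prefix of the model, return the
    predicted remaining steps and their estimated total duration."""
    if model[:len(evidence)] != evidence:
        return None                        # evidence does not fit this model
    remaining = model[len(evidence):]
    return remaining, sum(durations[s] for s in remaining)

# After observing two completed steps, the remainder is predicted.
steps, eta = predict(["take_order", "fetch_coffee"], SERVE_COFFEE, DURATIONS)
```

A real interpretation system would entertain several candidate models at once and let constraints (e.g. temporal ones) prune inconsistent continuations; this sketch only shows the explain-then-project pattern.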

  • Context-Based Probabilistic Scene Interpretation
    Artificial Intelligence in Theory and Practice III (IFIP AI), 2010
    Co-Authors: Bernd Neumann, Kasim Terzic
    Abstract:

    In high-level Scene Interpretation, it is useful to exploit the evolving probabilistic context for stepwise Interpretation decisions. We present a new approach based on a general probabilistic framework and beam search for exploring alternative Interpretations. As probabilistic Scene models, we propose Bayesian Compositional Hierarchies (BCHs) which provide object-centered representations of compositional hierarchies and efficient evidence-based updates. It is shown that a BCH can be used to represent the evolving context during stepwise Scene Interpretation and can be combined with low-level image analysis to provide dynamic priors for object classification, improving classification and Interpretation. Experimental results are presented illustrating the feasibility of the approach for the Interpretation of facade images.
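
The beam-search component of this approach can be sketched as follows. This is a generic beam search over class assignments with toy, context-free scores; it does not implement a Bayesian Compositional Hierarchy, whose role would be to supply context-dependent priors that re-rank the candidates at each step. All labels and probabilities below are invented.

```python
# Minimal beam-search sketch for stepwise scene interpretation: each
# piece of evidence is assigned one of several class labels, and only
# the `beam_width` most probable partial interpretations are kept.

def interpret(evidence_scores, beam_width=2):
    """evidence_scores: one {label: probability} dict per evidence item.
    Returns (labels, joint_score) of the best complete interpretation."""
    beam = [([], 1.0)]                      # (labels so far, joint score)
    for scores in evidence_scores:
        candidates = [(labels + [lab], p * q)
                      for labels, p in beam
                      for lab, q in scores.items()]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:beam_width]      # prune to the beam width
    return beam[0]

# Two image regions of a facade, each with toy classification scores.
labels, score = interpret([
    {"window": 0.6, "door": 0.4},
    {"window": 0.7, "balcony": 0.3},
])
```

Keeping a beam rather than a single best hypothesis is what lets later evidence rescue an interpretation that looked second-best early on, which is the motivation for exploring alternative interpretations.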

  • Generation of Rules from Ontologies for High-Level Scene Interpretation
    Rules and Rule Markup Languages for the Semantic Web, 2009
    Co-Authors: Wilfried Bohlken, Bernd Neumann
    Abstract:

    In this paper, a novel architecture for high-level Scene Interpretation is introduced, which is based on the generation of rules from an OWL-DL ontology. It is shown that the object-centered structure of the ontology can be transformed into a rule-based system in a native and systematic way. Furthermore the integration of constraints - which are essential for Scene Interpretation - is demonstrated with a temporal constraint net, and it is shown how parallel computing of alternatives can be realised. First results are given using examples of airport activities.
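
The flavor of transforming an object-centered aggregate definition into a rule can be sketched as below. The `Unloading` aggregate, its parts, and its temporal constraint are invented examples standing in for the paper's airport-activity models; the actual system generates rules from an OWL-DL ontology rather than from Python dictionaries.

```python
# Illustrative sketch: an object-centered aggregate (parts plus a
# temporal constraint) expressed as a forward rule that fires when all
# parts are observed and the constraint holds.

AGGREGATE = {
    "name": "Unloading",                   # hypothetical aggregate concept
    "parts": ["vehicle_arrives", "loader_positioned"],
    # temporal constraint: the second part must start after the first
    "constraint": lambda t: t["loader_positioned"] > t["vehicle_arrives"],
}

def apply_rule(aggregate, observations):
    """observations: {event_name: start_time}. Return the aggregate name
    when all parts are present and the temporal constraint is satisfied."""
    if all(p in observations for p in aggregate["parts"]):
        if aggregate["constraint"](observations):
            return aggregate["name"]
    return None

result = apply_rule(AGGREGATE, {"vehicle_arrives": 3, "loader_positioned": 8})
```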

J.r. Miller - One of the best experts on this subject based on the ideXlab platform.

  • Toward laser pulse waveform analysis for Scene Interpretation
    IEEE International Conference on Robotics and Automation (ICRA '04), 2004
    Co-Authors: Nicolas Vandapel, Omead Amidi, J.r. Miller
    Abstract:

    Laser-based sensing for Scene Interpretation and obstacle detection is challenged by partially viewed targets, wiry structures, and porous objects. We propose to address such problems by looking at the laser pulse waveform. We designed a new laser sensor from off-the-shelf components, and report on the design and evaluation of this low-cost, compact sensor, suitable for mobile robot applications. We determine classical parameters such as operating range, repeatability, accuracy, and resolution, but we also analyze laser pulse waveform modes and mode shapes in order to extract additional information about the Scene.
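
Extracting waveform modes can be sketched as peak detection on the sampled return pulse: several local maxima above a noise floor suggest several partial returns, e.g. from a wiry or porous target in front of a solid surface. The sample values and threshold below are invented for illustration.

```python
# Hypothetical sketch of waveform-mode extraction: the returned laser
# pulse is a sampled intensity curve, and each local maximum above a
# noise threshold is reported as one mode (one partial return).

def find_modes(waveform, threshold=0.2):
    """Return (index, amplitude) for each local maximum above threshold."""
    modes = []
    for i in range(1, len(waveform) - 1):
        if (waveform[i] > threshold
                and waveform[i] >= waveform[i - 1]
                and waveform[i] > waveform[i + 1]):
            modes.append((i, waveform[i]))
    return modes

# Two echoes: a strong return (e.g. foliage) and a weaker, later one
# (e.g. the ground behind it).
pulse = [0.0, 0.1, 0.8, 0.3, 0.1, 0.4, 0.1, 0.0]
modes = find_modes(pulse)
```

The mode *shapes* (width, asymmetry) around each peak carry further information about the target; this sketch only recovers the number and position of the modes.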

Roberto Passerone - One of the best experts on this subject based on the ideXlab platform.

  • A 33 μW 64 × 64 Pixel Vision Sensor Embedding Robust Dynamic Background Subtraction for Event Detection and Scene Interpretation
    IEEE Journal of Solid-State Circuits, 2013
    Co-Authors: Nicola Cottini, M Gottardi, Nicola Massari, Roberto Passerone, Zeev Smilansky
    Abstract:

    A 64 × 64-pixel ultra-low-power vision sensor is presented, performing pixel-level dynamic background subtraction as the low-level processing layer of an algorithm for Scene Interpretation. The pixel embeds two digitally programmable Switched-Capacitor Low-Pass Filters (SC-LPFs) and two clocked comparators, aimed at detecting any anomalous behavior of the current photo-generated signal with respect to its past history. The 45 T, 26 μm square pixel has a fill factor of 12%. The vision sensor has been fabricated in a 0.35 μm 2P3M CMOS process, is powered at 3.3 V, and consumes 33 μW at 13 fps, which corresponds to 620 pW/(frame·pixel).
