Smart Camera

The Experts below are selected from a list of 4,104 Experts worldwide, ranked by the ideXlab platform.

Richard Kleihorst - One of the best experts on this subject based on the ideXlab platform.

  • Demo: Mouse sensor networks, the Smart Camera
    2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras, 2011
    Co-Authors: Marco Camilli, Richard Kleihorst
    Abstract:

    This paper describes an extremely low-cost Smart Camera with an imaging sensor, a freely programmable DSP, power control, and wired/wireless networking capabilities. Power consumption ranges from 3 mW to 240 mW depending on load and transmission rate, and the bill of materials (BOM) for a single device is now 25 euros. We were able to reduce both power consumption and price by moving to minimal-resolution imagers (30×30 pixels), which lowers the performance demands on the DSP engine. Despite the lower resolution, and with processing frame rates of up to 80 fps, the device still supports many common visual-sensor applications such as object detection, fall detection, motion estimation and face detection. In addition, the resolution is low enough to guarantee privacy.
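
    As a rough illustration of the kind of lightweight processing such a minimal-resolution imager permits (a sketch under assumed thresholds, not the authors' firmware), the Python snippet below runs frame-difference motion detection on 30×30 grayscale frames:

      # Illustrative sketch: frame-difference motion detection on a 30x30 stream.
      import numpy as np

      WIDTH, HEIGHT = 30, 30        # minimal-resolution imager from the abstract
      DIFF_THRESHOLD = 20           # per-pixel intensity change (assumed value)
      MOTION_PIXELS = 15            # changed pixels needed to signal motion (assumed)

      def motion_detected(prev_frame: np.ndarray, curr_frame: np.ndarray) -> bool:
          """Flag motion when enough pixels changed between consecutive frames."""
          diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
          return int(np.count_nonzero(diff > DIFF_THRESHOLD)) >= MOTION_PIXELS

      # Synthetic frames standing in for the sensor stream (up to 80 fps).
      prev = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
      curr = prev.copy()
      curr[10:15, 10:15] = 200      # a bright moving blob
      print(motion_detected(prev, curr))   # True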

  • Abnormal motion detection in a real-time Smart Camera system
    2009 3rd ACM IEEE International Conference on Distributed Smart Cameras ICDSC 2009, 2009
    Co-Authors: Mona Akbarniai Tehrani, Peter Meijer, Richard Kleihorst, Lambert Spaanenburg
    Abstract:

    This paper discusses a method for abnormal motion detection and its real-time implementation on a Smart Camera. Abnormal motion detection is a surveillance technique in which only unfamiliar motion patterns raise alarms. Our approach has two phases. First, normal motion is detected and the motion paths are learned, building up a model of normal behaviour; feed-forward neural networks are used here for learning. Second, abnormal motion is detected by comparing the currently observed motion to the stored model. A complete demonstration system is implemented that detects abnormal paths of persons moving in an indoor space. As the platform we used a wireless Smart Camera system containing an SIMD (single-instruction multiple-data) processor for real-time detection of moving persons and an 8051 microcontroller for implementing the neural network. The 8051 also functions as the Camera host, broadcasting abnormal events over ZigBee to a main network system.
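
    The Python sketch below illustrates the two-phase idea under stated assumptions: a small feed-forward network (scikit-learn's MLPRegressor, used autoencoder-style) is trained on features of normal paths, and a new path is flagged as abnormal when its reconstruction error exceeds a threshold. The feature encoding (path_to_features), the network size and the threshold are assumptions for illustration, not the paper's exact design.

      # Phase 1: learn normal paths with a feed-forward net; Phase 2: flag deviations.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def path_to_features(path, n_points=8):
          """Resample an (x, y) path to a fixed-length feature vector (assumed encoding)."""
          path = np.asarray(path, dtype=float)
          idx = np.linspace(0, len(path) - 1, n_points)
          xs = np.interp(idx, np.arange(len(path)), path[:, 0])
          ys = np.interp(idx, np.arange(len(path)), path[:, 1])
          return np.concatenate([xs, ys])

      # Phase 1: model normal behaviour (noisy straight corridor walks here).
      normal_paths = [[(t, 5 + 0.1 * np.random.randn()) for t in range(20)] for _ in range(50)]
      X = np.array([path_to_features(p) for p in normal_paths])
      net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      net.fit(X, X)   # autoencoder-style: reproduce normal path features

      # Phase 2: compare observed motion against the stored model.
      def is_abnormal(path, threshold=2.0):
          f = path_to_features(path).reshape(1, -1)
          return float(np.mean((net.predict(f) - f) ** 2)) > threshold

      print(is_abnormal([(t, 5) for t in range(20)]))   # corridor walk: expected False
      print(is_abnormal([(t, t) for t in range(20)]))   # diagonal cut: expected True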

  • Toward low-latency gesture control using a Smart Camera network
    Computer Vision and Pattern Recognition, 2008
    Co-Authors: Zoran Zivkovic, V Kliger, Alexander Danilin, Ben Schueler, G Arturi, Chungching Chang, Richard Kleihorst, Hamid Aghajan
    Abstract:

    Real-world gesture-controlled applications are not yet widely deployed, mainly due to strong practical constraints. As a step toward practical, affordable, low-power, real-time, low-latency gesture control, we present a Smart Camera system and an algorithm for upper-body pose reconstruction implemented on it. A single-instruction multiple-data (SIMD) processor on the Smart Camera platform is used to detect a person's head and hands. The detected head and hand candidate positions are then transmitted to a central processor (a PC), where the data are combined and final decisions are made. The implementation of the computer vision algorithm on the SIMD Camera processor is presented. We also describe the complete wireless Smart Camera system and analyze its performance and practical issues.
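
    A minimal sketch of this split, assuming a simple message format of our own (camera id, body part, position, confidence) rather than the authors' wire protocol: each Smart Camera reports head/hand candidates and the central host fuses them per body part.

      # Candidate messages from the Cameras and confidence-weighted fusion on the PC.
      from dataclasses import dataclass
      from collections import defaultdict

      @dataclass
      class Candidate:
          camera_id: int
          part: str          # "head", "left_hand", "right_hand"
          x: float           # image/world coordinate (assumed)
          y: float
          confidence: float  # detector score in [0, 1] (assumed)

      def fuse(candidates):
          """Confidence-weighted average of all candidates reported for each part."""
          by_part = defaultdict(list)
          for c in candidates:
              by_part[c.part].append(c)
          fused = {}
          for part, group in by_part.items():
              w = sum(c.confidence for c in group)
              fused[part] = (sum(c.x * c.confidence for c in group) / w,
                             sum(c.y * c.confidence for c in group) / w)
          return fused

      msgs = [Candidate(0, "head", 120, 40, 0.9), Candidate(1, "head", 124, 42, 0.7),
              Candidate(0, "left_hand", 80, 150, 0.6)]
      print(fuse(msgs))   # one fused (x, y) estimate per reported body part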

  • Real-time human posture reconstruction in wireless Smart Camera networks
    Proceedings - 2008 International Conference on Information Processing in Sensor Networks IPSN 2008, 2008
    Co-Authors: Chen Wu, Hamid Aghajan, Richard Kleihorst
    Abstract:

    While providing a variety of intriguing application opportunities, a vision sensor network poses three key challenges. High computation capacity is required for early-vision functions to enable real-time performance. Wireless links limit image transmission in the network due to both bandwidth and energy concerns. Last but not least, there is a lack of established vision-based fusion mechanisms for when a network of Cameras is available. In this paper, a distributed vision processing implementation of human pose interpretation on a wireless Smart Camera network is presented. The motivation for employing distributed processing is both to achieve real-time vision and to provide scalability for developing more complex vision algorithms. The distributed processing operates at two levels. First, each Smart Camera processes its local vision data, achieving spatial parallelism. Second, the different functionalities of the vision-processing pipeline are assigned to early-vision and object-level processors, achieving functional parallelism based on the processors' capabilities. Aiming for low power consumption and high image processing performance, the wireless Smart Camera is based on an SIMD (single-instruction multiple-data) video analysis processor, an 8051 microcontroller as the local host, and wireless communication through the IEEE 802.15.4 standard. The vision algorithm implements 3D human pose reconstruction. From the live image data of its sensor, the Smart Camera extracts critical joints of the subject in the scene through local processing. The results obtained by multiple Smart Cameras are then transmitted over the wireless channel to a central PC, where the 3D pose is recovered and demonstrated in a virtual-reality gaming application. The system operates in real time at 30 frames/s.
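
    As a toy version of the central reconstruction step (the paper's actual recovery method and calibration are not detailed here), the sketch below triangulates one joint from two Cameras' 2D detections using standard linear (DLT) triangulation with assumed projection matrices.

      # Linear triangulation of a single joint observed by two pinhole Cameras.
      import numpy as np

      def triangulate(P1, P2, pt1, pt2):
          """Recover a 3D point seen at pt1 by camera P1 and at pt2 by camera P2."""
          x1, y1 = pt1
          x2, y2 = pt2
          A = np.stack([x1 * P1[2] - P1[0],
                        y1 * P1[2] - P1[1],
                        x2 * P2[2] - P2[0],
                        y2 * P2[2] - P2[1]])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]

      # Two assumed projection matrices (identity intrinsics for brevity).
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at the origin
      P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # camera shifted 1 m along x
      project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
      X_true = np.array([0.5, 0.2, 3.0])                              # a synthetic joint position
      print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))   # ~[0.5 0.2 3.0]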

  • Real-time face recognition on a Smart Camera
    Advanced Concepts for Intelligent Vision Systems, 2003
    Co-Authors: Hamed Fatemi, Richard Kleihorst, Henk Corporaal, Pieter Jonker, Den Dolech
    Abstract:

    There is a rapidly growing demand for Cameras containing built-in intelligence for purposes such as surveillance and identification. Face recognition has recently become an important application for these Cameras, and it requires considerable processing performance if real-time constraints are taken into account. The purpose of this paper is to demonstrate that, by tuning the application algorithms and their implementation and by using a proper multi-processor architecture, face recognition can be performed in real time, up to faces per second, on a Smart Camera no bigger than a typical surveillance Camera.
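
    A back-of-the-envelope sketch of why a multi-processor architecture helps meet the real-time constraint, using assumed per-face stage timings rather than measurements from the paper: with the stages pipelined over separate processors, throughput is limited by the slowest stage instead of the sum of all stages.

      # Assumed per-face costs for three pipeline stages (illustrative numbers only).
      stage_ms = {"detect": 18.0, "normalize": 6.0, "match": 10.0}

      sequential_fps = 1000.0 / sum(stage_ms.values())   # all stages on one processor
      pipelined_fps = 1000.0 / max(stage_ms.values())    # one processor per stage
      print(f"sequential: {sequential_fps:.1f} faces/s, pipelined: {pipelined_fps:.1f} faces/s")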

Faisal Z Qureshi - One of the best experts on this subject based on the ideXlab platform.

  • Smart Camera Networks in Virtual Reality: simulated Smart Cameras track the movement of simulated pedestrians in a simulated train station, allowing development of improved control strategies for Smart Camera networks
    2020
    Co-Authors: Faisal Z Qureshi, Demetri Terzopoulos
    Abstract:

    This paper presents our research towards Smart Camera networks capable of carrying out advanced surveillance tasks with little or no human supervision. A unique centerpiece of our work is the combination of computer graphics, artificial life, and computer vision simulation technologies to develop such networks and experiment with them. Specifically, we demonstrate a Smart Camera network comprising static and active simulated video surveillance Cameras that provides extensive coverage of a large virtual public space, a train station populated by autonomously self-animating virtual pedestrians. The realistically simulated network of Smart Cameras performs persistent visual surveillance of individual pedestrians with minimal intervention. Our innovative Camera control strategy naturally addresses Camera aggregation and handoff, is robust against Camera and communication failures, and requires no Camera calibration, detailed world model, or central controller.

  • Activity-aware video collection to minimize resource usage in Smart Camera nodes
    2011 8th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2011
    Co-Authors: Faisal Z Qureshi
    Abstract:

    We envision future video sensor networks comprising tether-less Smart Camera nodes capable of supporting a variety of applications, ranging from video surveillance and traffic management to Smart environments and ecological monitoring. A key difference between video sensor networks and traditional multi-Camera systems is that the latter are typically not concerned with power, storage, and bandwidth usage. Power requirements, especially, must be considered when designing tether-less Smart Camera networks, since the operational life of a Camera node is closely tied to the available power. Video capture and processing performed on a Camera node, and the communication between nodes needed to carry out collaborative sensing tasks, determine the power usage of these nodes. Therefore, one must devise strategies to minimize video capture and processing at each node and communication between nodes in order to reduce power consumption, thereby increasing the operational life of a video sensor network.
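
    The sketch below is a hedged illustration of this argument rather than the paper's algorithm: a node that captures, processes, and transmits only while activity is present uses far less energy than an always-on node. The capture policy and energy figures are assumptions.

      # Compare an activity-aware capture policy against an always-on node.
      IDLE_COST = 1.0     # assumed energy units/s for low-rate idle sensing
      ACTIVE_COST = 12.0  # assumed energy units/s for full-rate capture and processing
      TX_COST = 5.0       # assumed energy units/s for reporting over the radio

      def simulate(activity_per_second):
          """Return (activity-aware energy, always-on energy) for a boolean activity trace."""
          aware = always_on = 0.0
          for active in activity_per_second:
              always_on += ACTIVE_COST + TX_COST
              aware += (ACTIVE_COST + TX_COST) if active else IDLE_COST
          return aware, always_on

      # One minute with 10 s of activity: the aware node spends far less energy.
      aware, always_on = simulate([False] * 50 + [True] * 10)
      print(f"activity-aware: {aware:.0f} units, always-on: {always_on:.0f} units")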

  • Smart Camera networks in virtual reality
    International Conference on Distributed Smart Cameras, 2007
    Co-Authors: Faisal Z Qureshi, Demetri Terzopoulos
    Abstract:

    We present Smart Camera network research in the context of a unique new synthesis of advanced computer graphics and vision simulation technologies. We design and experiment with simulated Camera networks within visually and behaviorally realistic virtual environments. Specifically, we demonstrate a Smart Camera network comprising static and active simulated video surveillance Cameras that provides perceptive coverage of a large virtual public space, a train station populated by autonomously self-animating virtual pedestrians. In the context of human surveillance, we propose a Camera network control strategy that enables a collection of Smart Cameras to provide perceptive scene coverage and perform persistent surveillance with minimal intervention. Our novel control strategy naturally addresses Camera aggregation and Camera handoff; it does not require Camera calibration, a detailed world model, or a central controller, and it is robust against Camera and communication failures.

Demetri Terzopoulos - One of the best experts on this subject based on the ideXlab platform.

  • Smart Camera networks in virtual reality
    Proceedings of the IEEE, 2008
    Co-Authors: Faisal Z Qureshi, Demetri Terzopoulos
    Abstract:

    This paper presents our research towards Smart Camera networks capable of carrying out advanced surveillance tasks with little or no human supervision. A unique centerpiece of our work is the combination of computer graphics, artificial life, and computer vision simulation technologies to develop such networks and experiment with them. Specifically, we demonstrate a Smart Camera network comprising static and active simulated video surveillance Cameras that provides extensive coverage of a large virtual public space, a train station populated by autonomously self-animating virtual pedestrians. The realistically simulated network of Smart Cameras performs persistent visual surveillance of individual pedestrians with minimal intervention. Our innovative Camera control strategy naturally addresses Camera aggregation and handoff, is robust against Camera and communication failures, and requires no Camera calibration, detailed world model, or central controller.
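
    As a toy illustration of decentralized handoff (not the authors' control strategy), the sketch below lets each Camera score how well it observes a pedestrian using only its own field of view; responsibility simply follows the best local score, so no central controller or inter-Camera calibration is assumed.

      # Responsibility for a pedestrian follows the Camera with the best local view.
      import math

      class Camera:
          def __init__(self, cam_id, cx, cy, fov_radius):
              self.cam_id, self.cx, self.cy, self.radius = cam_id, cx, cy, fov_radius

          def score(self, px, py):
              """Higher when the pedestrian is well inside this Camera's field of view."""
              d = math.hypot(px - self.cx, py - self.cy)
              return max(0.0, 1.0 - d / self.radius)

      def assign(cameras, px, py):
          best = max(cameras, key=lambda c: c.score(px, py))
          return best.cam_id if best.score(px, py) > 0 else None

      cams = [Camera(0, 0, 0, 10), Camera(1, 15, 0, 10)]
      for x in (2, 8, 14):                      # a pedestrian walking along the x-axis
          print(f"x={x}: tracked by camera {assign(cams, x, 0)}")
      # Tracking hands off from camera 0 to camera 1 as the pedestrian moves right.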

  • Smart Camera networks in virtual reality
    International Conference on Distributed Smart Cameras, 2007
    Co-Authors: Faisal Z Qureshi, Demetri Terzopoulos
    Abstract:

    We present Smart Camera network research in the context of a unique new synthesis of advanced computer graphics and vision simulation technologies. We design and experiment with simulated Camera networks within visually and behaviorally realistic virtual environments. Specifically, we demonstrate a Smart Camera network comprising static and active simulated video surveillance Cameras that provides perceptive coverage of a large virtual public space, a train station populated by autonomously self-animating virtual pedestrians. In the context of human surveillance, we propose a Camera network control strategy that enables a collection of Smart Cameras to provide perceptive scene coverage and perform persistent surveillance with minimal intervention. Our novel control strategy naturally addresses Camera aggregation and Camera handoff; it does not require Camera calibration, a detailed world model, or a central controller, and it is robust against Camera and communication failures.

Beate Rinner - One of the best experts on this subject based on the ideXlab platform.

  • CamSim: A distributed Smart Camera network simulator
    Proceedings - IEEE 7th International Conference on Self-Adaptation and Self-Organizing Systems Workshops SASOW 2013, 2014
    Co-Authors: Lukas Esterle, Horatio Caine, Peter R Lewis, Xin Yao, Beate Rinner
    Abstract:

    Smart Cameras allow pre-processing of video data on the Camera instead of sending it to a remote server for further analysis. A network of Smart Cameras allows various vision tasks to be processed in a distributed fashion. While Cameras may have different tasks, we concentrate on distributed tracking in Smart Camera networks. This application introduces several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as Cameras in the network trying to track objects while also keeping communication overhead low? Secondly, how can Cameras in the network self-adapt in response to the behavior of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can Cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical Smart Camera network. The simulation tool is written in Java and hence is highly portable across operating systems. By abstracting away various computer vision and network communication problems, CamSim enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for Cameras.
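
    The snippet below is a minimal simulation-loop sketch in the spirit of this description, written in Python rather than CamSim's Java and using assumed utility and cost values: at each step, visible Cameras earn tracking utility while redundant trackers incur coordination cost, making the trade-off in the first question explicit.

      # Toy network of three Cameras tracking one drifting object.
      import random

      random.seed(1)
      cameras = [{"id": i, "x": i * 10.0, "range": 8.0} for i in range(3)]
      object_x, utility, comm_cost = 0.0, 0.0, 0.0

      for step in range(30):
          object_x += random.uniform(0.0, 1.5)            # the object drifts to the right
          visible = [c for c in cameras if abs(object_x - c["x"]) <= c["range"]]
          if visible:
              utility += 1.0                              # someone is tracking the object
              if len(visible) > 1:
                  comm_cost += len(visible) - 1           # redundant trackers must coordinate
      print(f"tracking utility: {utility:.0f}, communication cost: {comm_cost:.0f}")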

  • Resource-aware configuration in Smart Camera networks
    IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2012
    Co-Authors: Beate Rinner, Lukas Esterle, Bernhard Dieber, Peter R Lewis, Xin Yao
    Abstract:

    A recent trend in Smart Camera networks is the ability to modify their functionality at runtime to better reflect changes in the observed scenes and in the specified monitoring tasks. In this paper we focus on different configuration methods for such networks. A configuration is given by three components: (i) a description of the Camera nodes, (ii) a specification of the area of interest by means of observation points and the associated monitoring activities, and (iii) a description of the analysis tasks. We introduce centralized, distributed and proprioceptive configuration methods and compare their properties and performance.
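
    A hedged data-structure sketch of the three configuration components listed above; the field names and example values are assumptions for illustration, not the paper's schema.

      # The three parts of a configuration: nodes, observation points, analysis tasks.
      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class CameraNode:                  # (i) description of a Camera node
          node_id: str
          position: Tuple[float, float]
          cpu_budget: float              # assumed resource attribute

      @dataclass
      class ObservationPoint:            # (ii) area of interest and monitoring activity
          location: Tuple[float, float]
          activity: str                  # e.g. "motion detection", "tracking"

      @dataclass
      class AnalysisTask:                # (iii) analysis task to run in the network
          name: str
          cost: float                    # assumed processing cost

      @dataclass
      class Configuration:
          nodes: List[CameraNode] = field(default_factory=list)
          points: List[ObservationPoint] = field(default_factory=list)
          tasks: List[AnalysisTask] = field(default_factory=list)

      cfg = Configuration(
          nodes=[CameraNode("cam-1", (0.0, 0.0), cpu_budget=1.0)],
          points=[ObservationPoint((5.0, 5.0), "motion detection")],
          tasks=[AnalysisTask("person detection", cost=0.4)],
      )
      print(len(cfg.nodes), len(cfg.points), len(cfg.tasks))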

  • Toward Pervasive Smart Camera Networks
    Multi-Camera Networks, 2009
    Co-Authors: Beate Rinner, Wayne Wolf
    Abstract:

    Abstract Smart Camera networks are real-time distributed embedded systems that perform computer vision using multiple Cameras. This new approach has emerged thanks to a confluence of simultaneous advances in four key disciplines: computer vision, image sensors, embedded computing, and sensor networks. In this chapter, we briefly review and classify Smart Camera platforms and networks into single Smart Cameras, distributed Smart Camera systems, and wireless Smart Camera networks. We elaborate the vision of pervasive Smart Camera networks and identify major research challenges. As the technology for these networks advances, we expect to see many new applications open up—transforming traditional multi-Camera systems into pervasive Smart Camera networks.

Den Dolech - One of the best experts on this subject based on the ideXlab platform.

  • Real-time face recognition on a Smart Camera
    Advanced Concepts for Intelligent Vision Systems, 2003
    Co-Authors: Hamed Fatemi, Richard Kleihorst, Henk Corporaal, Pieter Jonker, Den Dolech
    Abstract:

    There is a rapidly growing demand for Cameras containing built-in intelligence for purposes such as surveillance and identification. Face recognition has recently become an important application for these Cameras, and it requires considerable processing performance if real-time constraints are taken into account. The purpose of this paper is to demonstrate that, by tuning the application algorithms and their implementation and by using a proper multi-processor architecture, face recognition can be performed in real time, up to faces per second, on a Smart Camera no bigger than a typical surveillance Camera.