Video Cameras

The Experts below are selected from a list of 360 Experts worldwide, ranked by the ideXlab platform.

Carlos H. Caldas - One of the best experts on this subject based on the ideXlab platform.

  • Automated object identification using optical Video Cameras on construction sites
    Computer-aided Civil and Infrastructure Engineering, 2010
    Co-Authors: Seokho Chi, Carlos H. Caldas
    Abstract:

    Visual recording devices such as Video Cameras, CCTVs, or webcams have been broadly used to facilitate work progress or safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard Video Cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. A background subtraction algorithm extracts motion pixels from an image sequence; the pixels are then grouped into regions that represent moving objects; finally, each region is identified as a particular object class using classifiers. To evaluate the method, the formulated computer-aided process was implemented on actual construction sites, and promising results were obtained. This article is expected to contribute to future applications of automated monitoring systems for work zone safety and productivity.
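
    The processing chain described above (background subtraction, grouping of motion pixels into regions, then classifying each region) can be sketched with off-the-shelf computer-vision primitives. The snippet below is a minimal illustration rather than the authors' implementation: the input file name, the blob-size threshold, and the shape-based classify_region stand-in are assumptions made purely for demonstration.

    ```python
    # Hedged sketch: background subtraction -> region grouping -> classification.
    # Not the paper's implementation; classifier and thresholds are placeholders.
    import cv2

    def classify_region(w, h):
        """Hypothetical stand-in classifier based only on bounding-box shape."""
        return "equipment" if w / float(h) > 1.5 else "worker"

    cap = cv2.VideoCapture("site.mp4")   # assumed input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                    # motion pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,     # suppress speckle noise
                                cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:                                # regions = moving objects
            x, y, w, h = cv2.boundingRect(c)
            if w * h < 500:                               # ignore tiny blobs
                continue
            label = classify_region(w, h)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```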

Schwalenberg Katrin - One of the best experts on this subject based on the ideXlab platform.

  • Methane sensor (METS) measurements at station HE537_35-1
    PANGAEA, 2020
    Co-Authors: Müller Hendrik, Heeschen, Katja U, Schwalenberg Katrin
    Abstract:

    Deep-towed methane and CTD data measured during RV Heincke expedition HE537 in the German 'Entenschnabel' area, North Sea. The methane sensor (METS) from the company Franatech was mounted on a frame (together with a CTD, Video Cameras, a forward-looking sonar, an altimeter, and a USBL transponder), which was towed 0.5-2 m above the sea floor at tow speeds of only 0.5-1 knots.

  • Methane sensor (METS) measurements at station HE537_41-1
    PANGAEA, 2020
    Co-Authors: Müller Hendrik, Heeschen, Katja U, Schwalenberg Katrin
    Abstract:

    Deep-towed methane and CTD data measured during RV Heincke expedition HE537 in the German 'Entenschnabel' area, North Sea. The methane sensor (METS) from the company Franatech was mounted on a frame (together with a CTD, Video Cameras, a forward-looking sonar, an altimeter, and a USBL transponder), which was towed 0.5-2 m above the sea floor at tow speeds of only 0.5-1 knots.

  • Methane sensor (METS) measurements at station HE537_51-1
    PANGAEA, 2020
    Co-Authors: Müller Hendrik, Heeschen, Katja U, Schwalenberg Katrin
    Abstract:

    Deep-towed methane and CTD data measured during RV Heincke expedition HE537 in the German 'Entenschnabel' area, North Sea. The methane sensor (METS) from the company Franatech was mounted on a frame (together with a CTD, Video Cameras, a forward-looking sonar, an altimeter, and a USBL transponder), which was towed 0.5-2 m above the sea floor at tow speeds of only 0.5-1 knots.

  • Methane sensor (METS) measurements at station HE537_82-1
    PANGAEA, 2020
    Co-Authors: Müller Hendrik, Heeschen, Katja U, Schwalenberg Katrin
    Abstract:

    Deep-towed methane and CTD data measured during RV Heincke expedition HE537 in the German 'Entenschnabel' area, North Sea. The methane sensor (METS) from the company Franatech was mounted on a frame (together with a CTD, Video Cameras, a forward-looking sonar, an altimeter, and a USBL transponder), which was towed 0.5-2 m above the sea floor at tow speeds of only 0.5-1 knots.

  • Methane sensor (METS) measurements at station HE537_34-1
    PANGAEA, 2020
    Co-Authors: Müller Hendrik, Heeschen, Katja U, Schwalenberg Katrin
    Abstract:

    Deep-towed methane and CTD data measured during RV Heincke expedition HE537 in the German 'Entenschnabel' area, North Sea. The methane sensor (METS) from the company Franatech was mounted on a frame (together with a CTD, Video Cameras, a forward-looking sonar, an altimeter, and a USBL transponder), which was towed 0.5-2 m above the sea floor at tow speeds of only 0.5-1 knots.

Ryosuke Shibasaki - One of the best experts on this subject based on the ideXlab platform.

  • sensing an intersection using a network of laser scanners and Video Cameras
    IEEE Intelligent Transportation Systems Magazine, 2009
    Co-Authors: Huijing Zhao, Jinshi Cui, Hongbin Zha, K Katabira, Xiaowei Shao, Ryosuke Shibasaki
    Abstract:

    In this research, a novel system for monitoring an intersection using a network of single-row laser range scanners (hereafter "laser scanners") and Video Cameras is proposed. Laser scanners are set on the roadside to profile an intersection horizontally from different viewpoints. The contour points of moving objects are captured on a fixed horizontal plane at a high scanning rate (e.g., 37 Hz). A laser-based processing algorithm is developed so that moving objects entering the intersection are detected and tracked to estimate their state parameters, such as location, speed, and direction, at each time instance. In addition, laser data and processing results are forwarded to an associated Video camera, so that both visualization and fusion-based processing can be achieved. An experiment in central Beijing is presented, demonstrating that a large quantity of physical dimension and detailed traffic data can be obtained through such a system.
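
    To make the tracking step concrete, the sketch below clusters the contour points from one horizontal laser scan into objects and associates the cluster centroids across consecutive scans to estimate position and velocity (hence speed and direction). It is a hedged illustration only: the single-linkage clustering, the 1 m association gate, and the track bookkeeping are assumptions, not the algorithm published by the authors.

    ```python
    # Hedged sketch of per-scan clustering and nearest-neighbour tracking.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    SCAN_HZ = 37.0   # scanning rate quoted in the abstract
    GATE = 1.0       # assumed max association distance between scans (metres)

    def cluster_centroids(points, max_gap=0.5):
        """Group contour points (N x 2 array, metres) into objects; return centroids."""
        if len(points) < 2:
            return np.asarray(points, dtype=float)
        labels = fcluster(linkage(points, method="single"),
                          t=max_gap, criterion="distance")
        return np.array([points[labels == k].mean(axis=0)
                         for k in np.unique(labels)])

    def update_tracks(tracks, centroids):
        """tracks: {id: (position, velocity)}. Returns the updated dictionary."""
        new_tracks, used = {}, set()
        for tid, (pos, _) in tracks.items():
            if len(centroids) == 0:
                continue
            d = np.linalg.norm(centroids - pos, axis=1)
            j = int(np.argmin(d))
            if d[j] < GATE and j not in used:
                vel = (centroids[j] - pos) * SCAN_HZ   # m/s from inter-scan motion
                new_tracks[tid] = (centroids[j], vel)
                used.add(j)
        next_id = max(list(tracks) + list(new_tracks) + [0]) + 1
        for j, c in enumerate(centroids):              # unmatched -> new tracks
            if j not in used:
                new_tracks[next_id] = (c, np.zeros(2))
                next_id += 1
        return new_tracks
    ```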

  • An efficient extrinsic calibration of a multiple laser scanners and Cameras' sensor system on a mobile platform
    2007 IEEE Intelligent Vehicles Symposium, 2007
    Co-Authors: Huijing Zhao, Yuzhong Chen, Ryosuke Shibasaki
    Abstract:

    This work is motivated by the development of a portable and low-cost solution for road mapping in downtown areas using a number of laser scanners and Video Cameras mounted on an intelligent vehicle. Sensors on the vehicle platform are considered removable, so extrinsic calibration is required after each sensor setup. Extrinsic calibration may need to be performed at or near the measurement site, so facilities such as a specially marked, large calibration environment cannot be assumed to be available. In this research, we present a practical method for the extrinsic calibration of multiple laser scanners and Video Cameras mounted on a vehicle platform. With respect to a fiducial coordinate system on the vehicle platform, a constraint between the data of a laser scanner and that of a Video camera is established. It is solved iteratively to find the best transformations from the laser scanner and from the Video camera to the fiducial coordinate system. All laser scanners and Video Cameras are then calibrated sequentially, one laser scanner and Video camera pair at a time, using their common feature points. An experiment is conducted using data measured on a normal street road. Calibration results are demonstrated by fusing the sensor data into a global coordinate system.
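
    A core building block of any such extrinsic calibration is recovering the rigid transform that maps points expressed in one sensor's frame into the vehicle's fiducial frame, given a set of corresponding points. The sketch below shows the standard SVD-based (Kabsch) least-squares solution to that sub-problem; the point correspondences and numbers are illustrative, and the paper's iterative multi-sensor scheme is considerably more involved.

    ```python
    # Hedged sketch: least-squares rigid transform between corresponding point sets.
    import numpy as np

    def rigid_transform(src, dst):
        """Return R, t such that dst ~= src @ R.T + t (src, dst are N x 3)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)   # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Illustrative only: features seen in a laser scanner's frame and the same
    # features expressed in the vehicle's fiducial frame.
    laser_pts = np.array([[1.0, 0.0, 0.2], [2.0, 1.0, 0.2],
                          [0.5, -1.0, 0.3], [3.0, 0.5, 0.1]])
    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    fiducial_pts = laser_pts @ true_R.T + np.array([0.5, 1.2, 0.0])

    R, t = rigid_transform(laser_pts, fiducial_pts)
    print(np.allclose(laser_pts @ R.T + t, fiducial_pts))   # True
    ```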

  • bio feedback control analysis of postural stability using ccd Video Cameras and a force plate sensor synchronized system
    Systems Man and Cybernetics, 1998
    Co-Authors: Masako Tsuruoka, Ryosuke Shibasaki, Shunji Murai
    Abstract:

    The developed system is composed of two CCD Video Cameras and a force plate sensor, which are controlled simultaneously by a personal computer. With this system, time-series data of the 3D coordinates of the joints and of each foot's pressure provide efficient bio-feedback control analysis of postural stability using autoregressive modeling. After therapy, in 80% of patients the power spectrum of the body's center-of-gravity fluctuations became inversely proportional to frequency, and the impulse response recovered.
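
    As a rough illustration of the analysis mentioned above (autoregressive modelling of centre-of-gravity fluctuations and inspection of their power spectrum, which the study reports as approaching a 1/f shape after therapy), the sketch below fits an AR model to a synthetic sway signal and estimates the log-log spectral slope. The sampling rate, AR order, and toy signal are assumptions for demonstration only.

    ```python
    # Hedged sketch: AR modelling and spectral slope of a synthetic sway signal.
    import numpy as np
    from scipy.signal import welch
    from statsmodels.tsa.ar_model import AutoReg

    fs = 50.0                                   # assumed sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    # Toy stand-in for a measured centre-of-gravity trace: slow drift plus sway.
    cog = np.cumsum(rng.normal(0, 0.05, t.size)) + 0.2 * np.sin(2 * np.pi * 0.3 * t)

    # Autoregressive model of the sway (order chosen arbitrarily for illustration).
    ar = AutoReg(cog, lags=8).fit()
    print("AR coefficients:", ar.params)

    # Power spectrum of the fluctuations; a 1/f trend shows up as a slope near -1
    # on a log-log plot of power versus frequency.
    freqs, power = welch(cog - cog.mean(), fs=fs, nperseg=1024)
    keep = freqs > 0
    slope = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)[0]
    print("log-log spectral slope:", slope)
    ```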

  • biomechanical and mathematical analysis of human movement in medical rehabilitation science using time series data from two Video Cameras and force plate sensor
    Spatial Information from Digital Photogrammetry and Computer Vision: ISPRS Commission III Symposium, 1994
    Co-Authors: Masako Tsuruoka, Shunji Murai, Ryosuke Shibasaki, Eiji Mori, Takao Wada, Masahiro Kurita, Makoto Iritani, Yoshikatsu Kuroki
    Abstract:

    In medical rehabilitation science, a quantitative understanding of patient movement in 3-D space is very important. A patient with any joint disorder will experience its influence on other body parts during daily movement, and the alignment of the joints during movement can improve over the course of medical therapy. In this study, a newly developed system composed of two non-metric CCD Video Cameras and a force plate sensor is controlled simultaneously by a personal computer. With this system, time-series digital data from 3-D image photogrammetry, together with each foot's pressure and its center position, provide efficient information for the biomechanical and mathematical analysis of human movement. Specific and common points are indicated for each patient's movement. This study suggests a broader, quantitative understanding in medical rehabilitation science.
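
    The geometric core of two-camera photogrammetry is triangulating a 3-D point, such as a joint position, from its image coordinates in two calibrated views. The sketch below shows standard linear (DLT) triangulation with made-up projection matrices; it is not the calibration or reconstruction pipeline used in the study.

    ```python
    # Hedged sketch: linear (DLT) triangulation from two calibrated views.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                     # dehomogenise

    # Two toy cameras: shared intrinsics, second camera shifted 0.5 m along x.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

    X_true = np.array([0.2, -0.1, 3.0, 1.0])    # a joint position (homogeneous)
    x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, x1, x2))          # ~ [0.2, -0.1, 3.0]
    ```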

Pierre A Pistorius - One of the best experts on this subject based on the ideXlab platform.

  • bird-borne Video Cameras show that seabird movement patterns relate to previously unrevealed proximate environment, not prey
    PLOS ONE, 2014
    Co-Authors: Yann Tremblay, Andrea Thiebault, Ralf H E Mullers, Pierre A Pistorius
    Abstract:

    The study of ecological and behavioral processes has been revolutionized in the last two decades by the rapid development of bio-logging science. Recently, using image-capturing devices, pilot studies have demonstrated the potential of understanding marine vertebrate movement patterns in relation to their proximate, as opposed to remotely sensed, environmental contexts. Here, using miniaturized Video Cameras and GPS tracking recorders simultaneously, we show for the first time that information on the immediate visual surroundings of a foraging seabird, the Cape gannet, is fundamental to understanding the origins of its movement patterns. We found that movement patterns were related to specific stimuli, mostly other predators such as gannets, dolphins, or fishing boats. Contrary to a widely accepted idea, our data suggest that foraging seabirds are not directly looking for prey. Instead, they search for indicators of the presence of prey, the latter being targeted at the very last moment and at a very small scale. We demonstrate that the movement patterns of foraging seabirds can be heavily driven by processes unobservable with conventional methodology. Except perhaps for large-scale processes, local enhancement seems to be the only ruling mechanism; this has profound implications for ecosystem-based management of marine areas.

Seokho Chi - One of the best experts on this subject based on the ideXlab platform.

  • Automated object identification using optical Video Cameras on construction sites
    Computer-aided Civil and Infrastructure Engineering, 2010
    Co-Authors: Seokho Chi, Carlos H. Caldas
    Abstract:

    Visual recording devices such as Video Cameras, CCTVs, or webcams have been broadly used to facilitate work progress or safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard Video Cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. A background subtraction algorithm extracts motion pixels from an image sequence; the pixels are then grouped into regions that represent moving objects; finally, each region is identified as a particular object class using classifiers. To evaluate the method, the formulated computer-aided process was implemented on actual construction sites, and promising results were obtained. This article is expected to contribute to future applications of automated monitoring systems for work zone safety and productivity.