Base Tangent - Explore the Science & Experts | ideXlab

Base Tangent

The experts below are selected from a list of 48 experts worldwide, ranked by the ideXlab platform

Murali Subbarao – 1st expert on this subject based on the ideXlab platform

  • Automatic 3D model reconstruction Based on novel pose estimation and integration techniques
    Image and Vision Computing, 2004
    Co-Authors: Soon-yong Park, Murali Subbarao

    Abstract:

    An automatic three-dimensional (3D) model reconstruction technique is presented to acquire complete and closed 3D models of real objects. The technique is based on novel approaches to pose estimation and integration. Two different poses of an object are used because a single pose often hides some surfaces from a range sensor; a second pose is used to expose such surfaces to the sensor. Two partial 3D models are reconstructed for the two poses of the object using a multi-view 3D modeling technique. The two 3D models are then registered in two steps: coarse registration and its refinement. Coarse registration is facilitated by a novel pose estimation technique, which estimates a rigid transformation between the two models. The pose is estimated by matching a stable tangent plane (STP) of each pose model with the base tangent plane, which is invariant for a vision system. We employ geometric constraints to find the STP. After registration refinement, the two models are integrated into a complete 3D model based on the voxel classification defined in multi-view integration. Texture mapping is done to obtain a photo-realistic reconstruction of the object. Reconstruction results and error analysis are presented for several real objects.
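    Matching a stable tangent plane (STP) to the base tangent plane (BTP) amounts to finding a rigid transformation that carries one supporting plane onto the other; the in-plane degrees of freedom that plane matching leaves free are what the geometric constraints and the refinement step resolve. The paper does not publish code, so the Python/NumPy sketch below is only a minimal illustration of the plane-to-plane part under assumed inputs (the function names, the choice of reference points, and the Rodrigues construction are assumptions, not the authors' implementation).

```python
import numpy as np

def rotation_between_vectors(a, b):
    """Rotation matrix sending unit vector a onto unit vector b (Rodrigues' formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)                     # rotation axis, scaled by sin(angle)
    c = float(np.dot(a, b))                # cosine of the rotation angle
    if np.isclose(c, -1.0):                # opposite normals: rotate 180 deg about an axis perpendicular to a
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def align_plane_to_plane(n_src, p_src, n_dst, p_dst):
    """Rigid transform (R, t) mapping the plane (n_src, p_src) onto the plane (n_dst, p_dst).

    p_src and p_dst are reference points on the two planes (e.g. centroids of the
    contact regions); the rotation about the destination normal and the in-plane
    translation remain unconstrained and must be fixed by other cues.
    """
    R = rotation_between_vectors(n_src, n_dst)
    t = p_dst - R @ p_src
    return R, t
```

    For instance, align_plane_to_plane(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 5.0]), np.array([0.0, 1.0, 0.0]), np.zeros(3)) brings a horizontal plane five units above the origin onto the y = 0 plane; in the paper's setting the destination plane would be the vision system's fixed base tangent plane.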

  • Stereo vision and range image techniques for generating 3D computer models of real objects
    , 2003
    Co-Authors: Murali Subbarao, Soon-yong Park

    Abstract:

    One topic of current research interest in three-dimensional (3D) model reconstruction is the generation of a complete and photorealistic 3D model from multiple views of an object. This dissertation addresses the problem of generating 3D computer models of real-world objects. We present stereo vision systems and computer vision techniques for complete 3D model reconstruction through a sequence of steps: (1) multi-view range image acquisition, (2) registration and integration of multi-view range images, (3) pose estimation of 3D models, (4) integration of two-pose 3D models, and (5) photorealistic texture mapping.
    We present two stereo vision systems to obtain multi-view range images and photometric textures of an object. Each system consists of a stereo camera and a motion control stage to change the view of the object. Calibration of both the stereo cameras and the motion control stages is presented. Range images obtained from multiple views of an object are registered to a common coordinate system through the calibrations of the vision systems. To refine the registration of multi-view range images, we introduce a novel registration refinement technique. The proposed technique combines point-to-tangent-plane and point-to-projection approaches for accurate and fast refinement.
    To merge registered range images, we present two different integration techniques. A mesh-based technique integrates range images by merging multiple contours on a cross section of a volumetric representation of the object. Slice-by-slice integration over all cross sections reconstructs a complete 3D model represented by a set of closed contours. We also present a volumetric multi-view integration technique. To remove erroneous points outside the visual hull of the object, a shape-from-silhouettes technique is incorporated. A 3D grid of voxels is classified into several sub-regions based on the signed distances of a voxel to the overlapping range images. The iso-surface of the object is reconstructed by a class-dependent technique of averaging the signed distances. The Marching Cubes algorithm then converts the iso-surface representation of the object to a 3D mesh model.
    For many real objects, a single pose yields only a partial 3D model because some surfaces of the object remain hidden from the range sensor due to occlusions or concavities. To obtain a complete and closed 3D model, we generate two 3D models of the object and then register and integrate them into a single 3D model. By placing the object in different suitable poses and sensing the visible surfaces, we reconstruct two partial 3D models. We then merge the partial 3D models by novel pose registration and integration techniques. Registration of the two pose models consists of two steps: coarse registration and its refinement. A pose estimation technique between two 3D models is presented to determine the coarse registration parameters; it finds a stable tangent plane (STP) on one 3D model which can be transformed to the base tangent plane (BTP) of the other model, and vice versa. After pose estimation, the two pose models are integrated to obtain a complete 3D model through a volumetric pose integration technique, which merges the two iso-surfaces of the corresponding partial 3D models. Texture mapping finally generates photorealistic 3D models of real-world objects.
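    The registration refinement described in the abstract above combines point-to-tangent-plane and point-to-projection matching, but no implementation details are given. As a rough, generic illustration of the point-to-tangent-plane error it refers to, the sketch below performs one linearised point-to-plane ICP update in Python/NumPy; the function and parameter names are illustrative assumptions, and the point-to-projection idea (finding correspondences by projecting source points into the destination range image instead of searching for closest points) only affects how dst_pts would be obtained and is not shown.

```python
import numpy as np

def point_to_plane_step(src_pts, dst_pts, dst_normals):
    """One linearised point-to-plane ICP update.

    src_pts     : (N, 3) points of the model being refined
    dst_pts     : (N, 3) corresponding points on the other model
    dst_normals : (N, 3) unit surface normals at those corresponding points

    Returns a rotation R and translation t that reduce the sum of squared
    point-to-tangent-plane distances, assuming small rotation angles.
    """
    # Linearised residual: (p + w x p + t - q) . n  ->  [p x n, n] . [w, t] = -(p - q) . n
    A = np.hstack([np.cross(src_pts, dst_normals), dst_normals])    # (N, 6)
    b = -np.einsum('ij,ij->i', src_pts - dst_pts, dst_normals)      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]

    # Convert the small-angle vector w back to a proper rotation matrix (Rodrigues)
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3), t
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K
    return R, t
```

    In a full refinement loop this step would be iterated, re-establishing correspondences each time, until the alignment error stops decreasing.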
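    The volumetric integration step in the abstract classifies voxels by their signed distances to the overlapping range images and extracts an iso-surface with Marching Cubes. The authors describe a class-dependent averaging rule and visual-hull carving via shape-from-silhouettes; the sketch below is a much simpler stand-in that assumes precomputed per-view signed-distance grids, uses a plain truncated average instead of the class-dependent rule, omits the silhouette carving, and relies on scikit-image's marching_cubes for surface extraction.

```python
import numpy as np
from skimage import measure  # provides a Marching Cubes implementation

def fuse_signed_distances(sdf_stack, trunc=5.0):
    """Fuse per-view signed distances into one volumetric grid.

    sdf_stack : (K, X, Y, Z) array holding the signed distance of every voxel to
                each of the K overlapping range images (negative inside the object).
    Voxels are averaged only over the views that observed them within the
    truncation band; voxels seen by no view default to "outside".
    """
    sdf = np.clip(sdf_stack, -trunc, trunc)
    observed = np.abs(sdf_stack) < trunc             # per-view visibility mask
    counts = observed.sum(axis=0)
    fused = np.where(counts > 0,
                     (sdf * observed).sum(axis=0) / np.maximum(counts, 1),
                     trunc)
    return fused

def extract_iso_surface(fused_sdf, voxel_size=1.0):
    """Extract the zero level set of the fused grid as a triangle mesh."""
    verts, faces, normals, _ = measure.marching_cubes(fused_sdf, level=0.0,
                                                      spacing=(voxel_size,) * 3)
    return verts, faces, normals
```

    A closer reimplementation would additionally mark voxels outside the silhouette cones as empty before averaging, which is the role the shape-from-silhouettes step plays in the abstract.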

Soon-yong Park – 2nd expert on this subject based on the ideXlab platform

  • Automatic 3D model reconstruction Based on novel pose estimation and integration techniques
    Image and Vision Computing, 2004
    Co-Authors: Soon-yong Park, Murali Subbarao

    Abstract:

    An automatic three-dimensional (3D) model reconstruction technique is presented to acquire complete and closed 3D models of real objects. The technique is based on novel approaches to pose estimation and integration. Two different poses of an object are used because a single pose often hides some surfaces from a range sensor; a second pose is used to expose such surfaces to the sensor. Two partial 3D models are reconstructed for the two poses of the object using a multi-view 3D modeling technique. The two 3D models are then registered in two steps: coarse registration and its refinement. Coarse registration is facilitated by a novel pose estimation technique, which estimates a rigid transformation between the two models. The pose is estimated by matching a stable tangent plane (STP) of each pose model with the base tangent plane, which is invariant for a vision system. We employ geometric constraints to find the STP. After registration refinement, the two models are integrated into a complete 3D model based on the voxel classification defined in multi-view integration. Texture mapping is done to obtain a photo-realistic reconstruction of the object. Reconstruction results and error analysis are presented for several real objects.

  • Stereo vision and range image techniques for generating 3D computer models of real objects
    , 2003
    Co-Authors: Murali Subbarao, Soon-yong Park

    Abstract:

    One topic of current research interest in three-dimensional (3D) model reconstruction is the generation of a complete and photorealistic 3D model from multiple views of an object. This dissertation addresses the problem of generating 3D computer models of real-world objects. We present stereo vision systems and computer vision techniques for complete 3D model reconstruction through a sequence of steps: (1) multi-view range image acquisition, (2) registration and integration of multi-view range images, (3) pose estimation of 3D models, (4) integration of two-pose 3D models, and (5) photorealistic texture mapping.
    We present two stereo vision systems to obtain multi-view range images and photometric textures of an object. Each system consists of a stereo camera and a motion control stage to change the view of the object. Calibration of both the stereo cameras and the motion control stages is presented. Range images obtained from multiple views of an object are registered to a common coordinate system through the calibrations of the vision systems. To refine the registration of multi-view range images, we introduce a novel registration refinement technique. The proposed technique combines point-to-tangent-plane and point-to-projection approaches for accurate and fast refinement.
    To merge registered range images, we present two different integration techniques. A mesh-based technique integrates range images by merging multiple contours on a cross section of a volumetric representation of the object. Slice-by-slice integration over all cross sections reconstructs a complete 3D model represented by a set of closed contours. We also present a volumetric multi-view integration technique. To remove erroneous points outside the visual hull of the object, a shape-from-silhouettes technique is incorporated. A 3D grid of voxels is classified into several sub-regions based on the signed distances of a voxel to the overlapping range images. The iso-surface of the object is reconstructed by a class-dependent technique of averaging the signed distances. The Marching Cubes algorithm then converts the iso-surface representation of the object to a 3D mesh model.
    For many real objects, a single pose yields only a partial 3D model because some surfaces of the object remain hidden from the range sensor due to occlusions or concavities. To obtain a complete and closed 3D model, we generate two 3D models of the object and then register and integrate them into a single 3D model. By placing the object in different suitable poses and sensing the visible surfaces, we reconstruct two partial 3D models. We then merge the partial 3D models by novel pose registration and integration techniques. Registration of the two pose models consists of two steps: coarse registration and its refinement. A pose estimation technique between two 3D models is presented to determine the coarse registration parameters; it finds a stable tangent plane (STP) on one 3D model which can be transformed to the base tangent plane (BTP) of the other model, and vice versa. After pose estimation, the two pose models are integrated to obtain a complete 3D model through a volumetric pose integration technique, which merges the two iso-surfaces of the corresponding partial 3D models. Texture mapping finally generates photorealistic 3D models of real-world objects.

Mariano Pernetti – 3rd expert on this subject based on the ideXlab platform

  • Perceptual Measures to Influence Operating Speeds and Reduce Crashes at Rural Intersections: Driving Simulator Experiment
    Transportation Research Record: Journal of the Transportation Research Board, 2010
    Co-Authors: Alfonso Montella, Massimo Aria, Francesco Galante, Filomena Mauriello, Antonio D'ambrosio, Mariano Pernetti

    Abstract:

    The aim of this paper is to investigate, by means of a dynamic driving simulator experiment, the behavior of road users at rural intersections in relation to perceptual measures designed to increase hazard detection. In the experiment, 10 configurations of tangents were tested: Alt1, base tangent; Alt2, four-leg base intersection; Alt3, intersection with reduced sight distance; and Alt4 through Alt10, intersections with perceptual treatments. The Virtual Environment for Road Safety high-fidelity dynamic driving simulator, operating at the Technology Environment Safety Transport Road Safety Laboratory in Naples, Italy, was used. The results were analyzed with two approaches: (a) exploratory description of the data by cluster analysis and (b) inferential procedures about the population using statistical tests. Results showed that speed behavior on the tangents was significantly affected by the presence of the intersections and by the perceptual treatments. Intersections without perceptual treatments significantly affected driver speeds in the 250 m preceding the intersection. Perceptual treatments helped the driver detect the intersection earlier and slow down. Dragon-teeth markings, a colored intersection area, and a raised median island performed better than the other perceptual treatments: they produced significant average speed reductions in the 150 m preceding the intersection, ranging between 16 km/h and 23 km/h.
    Study results support real-world implementation of perceptual measures at rural intersections because they are low-cost, quickly implemented measures with a high potential to be cost-effective.