Scene Point

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 843 Experts worldwide ranked by ideXlab platform

S.K. Nayar - One of the best experts on this subject based on the ideXlab platform.

  • Multidimensional fusion by image mosaics
    Image Fusion, 2020
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    Image mosaicing creates a wide field of view image of a scene by fusing data from narrow field images. As a camera moves, each scene point is typically sensed multiple times during frame acquisition. Here we describe generalised mosaicing, an approach that enhances this process. An optical component with spatially varying properties is rigidly attached to the camera, so that the multiple measurements corresponding to any scene point are made under different optical settings. Fusing the data captured by the multiple frames yields an image mosaic that includes additional information about the scene. This information can come in the form of extended dynamic range, high spectral quality, polarisation sensitivity or extended depth of field (focus). For instance, suppose the state of best focus in the camera is spatially varying; this can be achieved by placing a transparent dielectric on the detector array. As the camera rigidly moves to enlarge the field of view, it senses each scene point multiple times, each time in a different focus setting. This yields an image with a wide depth of field and a wide field of view, as well as a rough depth map of the scene.
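The per-scene-point bookkeeping described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a purely horizontal pan with known integer frame offsets, 1-D scan lines, and a known optical setting per filter column (all hypothetical simplifications; the function name is made up).

```python
def accumulate_measurements(frames, offsets, setting_of_column):
    """Collect, for every panorama column, the (setting, value) pairs
    observed as the camera pans.

    frames            -- list of 1-D scan lines (lists of pixel values)
    offsets           -- horizontal panorama offset of each frame
    setting_of_column -- optical setting (focus state, transmittance, ...)
                         at each column of the rigidly attached filter
    """
    width = max(off + len(f) for f, off in zip(frames, offsets))
    samples = [[] for _ in range(width)]
    for frame, off in zip(frames, offsets):
        for x, value in enumerate(frame):
            # the same scene point lands at panorama column off + x,
            # but is seen through column x of the rigid filter
            samples[off + x].append((setting_of_column[x], value))
    return samples
```

Each panorama column then holds one measurement per optical setting it was seen through; the fusion stage interprets these pairs according to the attached filter (focus, transmittance, spectrum, or polarization).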

  • Generalized mosaicing: polarization panorama
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach to imaging the polarization state of object points in a wide field of view, while enhancing the radiometric dynamic range of imaging systems by generalizing image mosaicing. The approach is biologically inspired, as it emulates the spatially varying polarization sensitivity of some animals. In our method, a spatially varying polarization and attenuation filter is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time through a different polarizing angle, polarizance, and transmittance of the filter. Polarization is an additional dimension of the generalized mosaicing paradigm, which has recently yielded high dynamic range and multispectral images in a wide field of view using other kinds of filters. The image acquisition is as easy as in traditional image mosaics. The computational algorithm handles nonideal polarization filters (partial polarizers), variable exposures, and saturation in a single framework. The resulting mosaic represents the polarization state at each scene point. Using data acquired by this method, we demonstrate attenuation and enhancement of specular reflections and semi-reflection separation in an image mosaic.
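As an illustration of how such multi-angle measurements determine the polarization state, the sketch below fits the linear Stokes components by least squares under an ideal-polarizer (Malus-type) model. This is a simplification of the paper's algorithm, which additionally handles partial polarizers, variable exposures, and saturation; the function name and interface are hypothetical.

```python
import numpy as np

def stokes_from_polarizer_samples(angles, intensities, transmittance=1.0):
    """Estimate the linear Stokes components (S0, S1, S2) of one scene
    point from intensity samples taken through an ideal linear polarizer
    at several orientations (angles in radians).

    Model: I(theta) = (t / 2) * (S0 + S1 cos 2theta + S2 sin 2theta)."""
    A = 0.5 * transmittance * np.column_stack(
        [np.ones_like(angles), np.cos(2 * angles), np.sin(2 * angles)])
    S, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    s0, s1, s2 = S
    dolp = np.hypot(s1, s2) / s0      # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)    # angle of polarization
    return s0, dolp, aop
```

Three or more distinct polarizer orientations make the system overdetermined, which is what allows the least-squares fit to absorb measurement noise.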

  • Polarization mosaicing: high dynamic range and polarization imaging in a wide field of view
    Polarization Science and Remote Sensing, 2003
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach for imaging the polarization state of scene points in a wide field of view, while enhancing the radiometric dynamic range of imaging systems. This is achieved by a simple modification of image mosaicing, a common technique in remote sensing. In traditional image mosaics, images taken in varying directions or positions are stitched to obtain a larger image. Yet, as the camera moves, it senses each scene point multiple times in overlapping regions of the raw frames. We rigidly attach to the camera a fixed, spatially varying polarization and attenuation filter. This way, the camera-motion-induced multiple measurements per scene point are taken under different optical settings, in contrast to the redundant measurements of traditional mosaics. Computational algorithms then analyze the data to extract polarization imaging with high dynamic range across the mosaic field of view. We developed a maximum-likelihood method to automatically register the images in spite of the challenging spatially varying effects. We then use maximum likelihood to handle, in a single framework, variable exposures (due to transmittance variations), saturation, and partial polarization filtering. As a by-product, these results enable the polarization settings of cameras to change while the camera moves, alleviating the need for camera stability. This work demonstrates the modularity of the Generalized Mosaicing approach, which we recently introduced for multispectral image mosaics. The results are useful for the wealth of polarization imaging applications, in addition to mosaicing applications, particularly remote sensing. We demonstrate experimental results obtained using a system we built.

  • Generalized Mosaicing: High Dynamic Range in a Wide Field of View
    International Journal of Computer Vision, 2003
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach that significantly enhances the capabilities of traditional image mosaicing. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that it can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions, as well as ways to self-calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment, we mounted such a filter on a standard 8-bit video camera to obtain an image panorama with a dynamic range comparable to imaging with a 16-bit camera.
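A toy version of the fusion step can illustrate the idea: if each measurement of a scene point is made through a known transmittance, the unsaturated measurements can be combined by a maximum-likelihood weighted average. The sketch below assumes equal additive sensor noise across frames (hence weights proportional to the squared transmittance); it is not the paper's full estimator, which also covers registration and self-calibration of the vignetting.

```python
import numpy as np

def fuse_hdr(samples, full_well=255.0):
    """Fuse measurements of one scene point taken through different
    known transmittances. Each sample is (transmittance, value).
    Saturated values are discarded; the rest are combined by the
    ML-weighted average for equal additive noise, i.e. weights
    proportional to the squared transmittance."""
    usable = [(t, v) for t, v in samples if v < full_well]
    if not usable:
        # every measurement saturated: report a lower bound on radiance
        return full_well / min(t for t, _ in samples)
    w = np.array([t * t for t, _ in usable])
    est = np.array([v / t for t, v in usable])  # per-sample radiance estimate
    return float(np.sum(w * est) / np.sum(w))
```

Dividing out the transmittance puts all frames on a common radiance scale; down-weighting low-transmittance samples reflects their worse signal-to-noise ratio.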

  • Generalized mosaicing: wide field of view multispectral imaging
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach to significantly enhance the spectral resolution of imaging systems by generalizing image mosaicing. A filter transmitting spatially varying spectral bands is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time in a different spectral band. This is an additional dimension of the generalized mosaicing paradigm, which has previously been shown to yield high radiometric dynamic range images in a wide field of view using a spatially varying density filter. The resulting mosaic represents the spectrum at each scene point. The image acquisition is as easy as in traditional image mosaics. We derive an efficient scene sampling rate and use a registration method that accommodates the spatially varying properties of the filter. Using the data acquired by this method, we demonstrate scene rendering under different simulated illumination spectra. We are also able to infer information about the scene illumination. The approach was tested using a standard 8-bit black-and-white video camera and a fixed, spatially varying spectral (interference) filter.
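The scene-rendering step mentioned above reduces, in its simplest form, to re-weighting each pixel's recovered spectrum by a simulated illuminant spectrum. The sketch below is a bare-bones illustration under that assumption (no registration, no camera response model; names are hypothetical).

```python
import numpy as np

def render_under_illuminant(reflectance_cube, illuminant):
    """Re-render a multispectral mosaic under a simulated illuminant:
    per-pixel reflectance spectra (H x W x B) are weighted by the
    illuminant spectrum (B,) and integrated over the bands into a
    grayscale image."""
    return (reflectance_cube * illuminant).sum(axis=-1)
```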

T. Pajdla - One of the best experts on this subject based on the ideXlab platform.

  • 3D reconstruction from 360×360 mosaics
    Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, 2001
    Co-Authors: H. Bakstein, T. Pajdla
    Abstract:

    We study the geometry of 360×360 mosaic image formation. A 360×360 mosaic camera model and a calibration procedure are proposed. It is shown that only one point correspondence is needed in order to acquire epipolar rectified images; the 360×360 mosaic camera model is therefore determined by only one intrinsic parameter. It is further shown that the relation between coordinates estimated with different values of the intrinsic 360×360 mosaic camera parameter is a scaling of all scene point coordinates, with additional nonlinear changes in the z coordinates of the scene points. Experimental results verifying the reconstruction of real scene points are presented.

H. Bakstein - One of the best experts on this subject based on the ideXlab platform.

  • 3D reconstruction from 360×360 mosaics
    Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, 2001
    Co-Authors: H. Bakstein, T. Pajdla
    Abstract:

    We study the geometry of 360×360 mosaic image formation. A 360×360 mosaic camera model and a calibration procedure are proposed. It is shown that only one point correspondence is needed in order to acquire epipolar rectified images; the 360×360 mosaic camera model is therefore determined by only one intrinsic parameter. It is further shown that the relation between coordinates estimated with different values of the intrinsic 360×360 mosaic camera parameter is a scaling of all scene point coordinates, with additional nonlinear changes in the z coordinates of the scene points. Experimental results verifying the reconstruction of real scene points are presented.

Y.Y. Schechner - One of the best experts on this subject based on the ideXlab platform.

  • Multidimensional fusion by image mosaics
    Image Fusion, 2020
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    Image mosaicing creates a wide field of view image of a scene by fusing data from narrow field images. As a camera moves, each scene point is typically sensed multiple times during frame acquisition. Here we describe generalised mosaicing, an approach that enhances this process. An optical component with spatially varying properties is rigidly attached to the camera, so that the multiple measurements corresponding to any scene point are made under different optical settings. Fusing the data captured by the multiple frames yields an image mosaic that includes additional information about the scene. This information can come in the form of extended dynamic range, high spectral quality, polarisation sensitivity or extended depth of field (focus). For instance, suppose the state of best focus in the camera is spatially varying; this can be achieved by placing a transparent dielectric on the detector array. As the camera rigidly moves to enlarge the field of view, it senses each scene point multiple times, each time in a different focus setting. This yields an image with a wide depth of field and a wide field of view, as well as a rough depth map of the scene.

  • Generalized mosaicing: polarization panorama
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach to imaging the polarization state of object points in a wide field of view, while enhancing the radiometric dynamic range of imaging systems by generalizing image mosaicing. The approach is biologically inspired, as it emulates the spatially varying polarization sensitivity of some animals. In our method, a spatially varying polarization and attenuation filter is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time through a different polarizing angle, polarizance, and transmittance of the filter. Polarization is an additional dimension of the generalized mosaicing paradigm, which has recently yielded high dynamic range and multispectral images in a wide field of view using other kinds of filters. The image acquisition is as easy as in traditional image mosaics. The computational algorithm handles nonideal polarization filters (partial polarizers), variable exposures, and saturation in a single framework. The resulting mosaic represents the polarization state at each scene point. Using data acquired by this method, we demonstrate attenuation and enhancement of specular reflections and semi-reflection separation in an image mosaic.

  • Polarization mosaicing: high dynamic range and polarization imaging in a wide field of view
    Polarization Science and Remote Sensing, 2003
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach for imaging the polarization state of scene points in a wide field of view, while enhancing the radiometric dynamic range of imaging systems. This is achieved by a simple modification of image mosaicing, a common technique in remote sensing. In traditional image mosaics, images taken in varying directions or positions are stitched to obtain a larger image. Yet, as the camera moves, it senses each scene point multiple times in overlapping regions of the raw frames. We rigidly attach to the camera a fixed, spatially varying polarization and attenuation filter. This way, the camera-motion-induced multiple measurements per scene point are taken under different optical settings, in contrast to the redundant measurements of traditional mosaics. Computational algorithms then analyze the data to extract polarization imaging with high dynamic range across the mosaic field of view. We developed a maximum-likelihood method to automatically register the images in spite of the challenging spatially varying effects. We then use maximum likelihood to handle, in a single framework, variable exposures (due to transmittance variations), saturation, and partial polarization filtering. As a by-product, these results enable the polarization settings of cameras to change while the camera moves, alleviating the need for camera stability. This work demonstrates the modularity of the Generalized Mosaicing approach, which we recently introduced for multispectral image mosaics. The results are useful for the wealth of polarization imaging applications, in addition to mosaicing applications, particularly remote sensing. We demonstrate experimental results obtained using a system we built.

  • Generalized Mosaicing: High Dynamic Range in a Wide Field of View
    International Journal of Computer Vision, 2003
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach that significantly enhances the capabilities of traditional image mosaicing. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that it can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions, as well as ways to self-calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment, we mounted such a filter on a standard 8-bit video camera to obtain an image panorama with a dynamic range comparable to imaging with a 16-bit camera.

  • Generalized mosaicing: wide field of view multispectral imaging
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
    Co-Authors: Y.Y. Schechner, S.K. Nayar
    Abstract:

    We present an approach to significantly enhance the spectral resolution of imaging systems by generalizing image mosaicing. A filter transmitting spatially varying spectral bands is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time in a different spectral band. This is an additional dimension of the generalized mosaicing paradigm, which has previously been shown to yield high radiometric dynamic range images in a wide field of view using a spatially varying density filter. The resulting mosaic represents the spectrum at each scene point. The image acquisition is as easy as in traditional image mosaics. We derive an efficient scene sampling rate and use a registration method that accommodates the spatially varying properties of the filter. Using the data acquired by this method, we demonstrate scene rendering under different simulated illumination spectra. We are also able to infer information about the scene illumination. The approach was tested using a standard 8-bit black-and-white video camera and a fixed, spatially varying spectral (interference) filter.

N. Ahuja - One of the best experts on this subject based on the ideXlab platform.

  • An Omnidirectional Stereo Vision System Using a Single Camera
    18th International Conference on Pattern Recognition (ICPR'06), 2006
    Co-Authors: Sooyeong Yi, N. Ahuja
    Abstract:

    We describe a new omnidirectional stereo imaging system that uses a concave lens and a convex mirror to produce a stereo pair of images on the sensor of a conventional camera. The light incident from a scene point is split and directed to the camera in two parts. One part reaches the camera directly after reflection from the convex mirror and forms a single-viewpoint omnidirectional image. The second part is formed by passing a sub-beam of the light reflected from the mirror through a concave lens, and forms a displaced single-viewpoint image whose disparity depends on the depth of the scene point. A closed-form expression for depth is derived. Since the optical components used are simple and commercially available, the resulting system is compact and inexpensive. This, and the simplicity of the required image processing algorithms, make the proposed system attractive for real-time applications such as autonomous navigation and object manipulation. The experimental prototype we have built is described.
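The closed-form depth expression itself depends on the specific mirror and lens geometry, which the paper derives; as a generic illustration of how a disparity between two viewpoints yields depth, the sketch below triangulates a point from two rays leaving viewpoints separated by a known baseline (a textbook construction, not the paper's formula).

```python
import math

def triangulate(theta1, theta2, baseline):
    """Intersect two rays leaving viewpoints (0, 0) and (baseline, 0)
    at elevation angles theta1 and theta2 (radians, measured from the
    baseline axis) and return the scene point (x, y)."""
    t1, t2 = math.tan(theta1), math.tan(theta2)
    # ray 1: y = t1 * x        ray 2: y = t2 * (x - baseline)
    x = baseline * t2 / (t2 - t1)
    return x, t1 * x
```

The angular disparity theta2 - theta1 shrinks as the point recedes, which is why depth precision degrades with distance in any such system.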

  • On generating seamless mosaics with large depth of field
    Proceedings 15th International Conference on Pattern Recognition. ICPR-2000, 2000
    Co-Authors: M. Aggarwal, N. Ahuja
    Abstract:

    Imaging cameras have only a finite depth of field, and only objects within that depth range are simultaneously in focus. The depth of field of a camera can be improved by mosaicing a sequence of images taken under different focal settings. In conventional mosaicing schemes, a focus measure is computed for every scene point across the image sequence, and the point is selected from the image where the focus measure is highest. We prove in this paper, however, that for a certain class of scene points the focus measure is not highest in the best-focused frame. The incorrect selection of image frames for these points causes visual artifacts to appear in the resulting mosaic. We also propose a method to isolate such scene points, and an algorithm to compose large depth of field mosaics without the undesirable artifacts.
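The conventional scheme that the paper analyzes can be sketched directly: compute a focus measure per pixel per frame and select, at each pixel, the frame that maximizes it. The code below uses a squared-Laplacian focus measure (one common choice; the abstract does not name a specific measure) and implements the baseline whose failure cases the paper identifies, not the proposed artifact-free algorithm.

```python
import numpy as np

def compose_focused_mosaic(stack):
    """Conventional focal-stack composition: a squared-Laplacian focus
    measure per pixel per frame, then per-pixel selection of the frame
    where the measure is highest. Returns the fused image and the
    per-pixel winning frame indices."""
    stack = np.asarray(stack, dtype=float)   # shape (n_frames, H, W)
    measures = []
    for img in stack:
        p = np.pad(img, 1, mode='edge')      # replicate borders
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]    # up + down neighbors
               + p[1:-1, :-2] + p[1:-1, 2:]  # left + right neighbors
               - 4 * img)
        measures.append(lap ** 2)
    best = np.argmax(measures, axis=0)       # (H, W) frame index per pixel
    fused = np.take_along_axis(stack, best[None], axis=0)[0]
    return fused, best
```

For the class of scene points the paper isolates, this argmax picks the wrong frame, which is exactly what produces the visual artifacts discussed above.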

  • Non-frontal imaging camera
    1997
    Co-Authors: N. Ahuja, Arun Krishnan
    Abstract:

    This thesis describes a Non-frontal Imaging Camera (NICAM) and its applications in panoramic focused image acquisition and panoramic range estimation. Standard cameras have sensor planes that are perpendicular to the optic axis. This causes sharp focus for parts of the scene that are all roughly at the same distance from the camera. When scene points are distributed over a range of distances from the sensor, creating a focused composite image involves changing some sensor parameters and obtaining a sequence of varying-focus images. From this sequence, a composite narrow-view focused image for that viewpoint can be created. Repeating this process for every viewpoint creates a focused panoramic image. This process of mechanical motions (translation of the sensor plane and panning of the camera) and focus computations can be time intensive. The NICAM has a sensor surface whose points are at different distances from the lens. Depending on where on the sensor surface the image of a scene point is formed (i.e., depending on the camera pan angle), the imaging parameter v assumes different values. This means that by controlling only the pan angle, we can achieve both goals of the traditional mechanical movements, namely changing the v values as well as scanning the visual field, in an integrated way. "Depth of field" (DOF) refers to the possible variation in depth of a scene point that does not affect the quality of image focus. Expressions for the depth of field are derived for the tilted NICAM and, by extension, for an arbitrary pixel on an arbitrary sensor surface. The thesis shows that closed-form symbolic solutions for the depth of field can be obtained by solving equations of degree 4.
    The thesis also shows that rotating the standard camera about an axis through a point at distance f (the focal length) in front of the lens center, along the optical axis, minimizes the number of parameter changes required to create a panoramic focused image. Experimental results on panoramic range estimation and focused image acquisition are also presented.
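For a conventional frontal sensor, the depth-of-field limits referred to above follow from standard thin-lens formulas; the thesis generalizes these to tilted and curved sensor surfaces, where the degree-4 equations arise. The sketch below implements only the standard frontal-sensor case, as a point of reference.

```python
def depth_of_field(f, n, s, coc):
    """Standard thin-lens depth-of-field limits for a frontal sensor
    (not the thesis's degree-4 solution for tilted sensors):
    f = focal length, n = f-number, s = focus distance,
    coc = acceptable circle-of-confusion diameter,
    all in the same length unit."""
    h = f * f / (n * coc) + f                       # hyperfocal distance
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float('inf')
    return near, far
```

For example, with f = 50 mm at f/8, a 0.03 mm circle of confusion, and focus at 5 m, the in-focus range runs from roughly 3.4 m to 9.5 m; focusing at or beyond the hyperfocal distance extends the far limit to infinity.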

  • A Nonfrontal Imaging Camera
    Lecture Notes in Computer Science, 1995
    Co-Authors: N. Ahuja
    Abstract:

    This talk describes a new approach to visual imaging called nonfrontal imaging, which has led to the design of a new type of camera with three salient characteristics: it can provide panoramic images of up to 360-degree views of a scene; each object is in complete focus regardless of its location; and the camera delivers the coordinates of each focusable, visible scene point, in addition to and registered with a sharp image.