Synthetic Image

The experts below are selected from a list of 321 experts worldwide, ranked by the ideXlab platform.

Jingzhou Huang - One of the best experts on this subject based on the ideXlab platform.

  • Obtaining Urban Waterlogging Depths from Video Images Using Synthetic Image Data
    Remote Sensing, 2020
    Co-Authors: Jingchao Jiang, Cheng-zhi Qin, Changxiu Cheng, Junzhi Liu, Jingzhou Huang
    Abstract:

    Reference objects in video images can be used to indicate urban waterlogging depths. Detecting these reference objects is the key step in obtaining waterlogging depths from video images. Object detection models based on convolutional neural networks (CNNs) have been used to detect reference objects. Such models require a large number of labeled images as training data to ensure applicability at a city scale. However, it is hard to collect a sufficient number of urban flooding images containing valuable reference objects, and manually labeling images is time-consuming and expensive. To solve this problem, we present a method to synthesize image data for use as training data. First, original images containing reference objects and original images with water surfaces are collected from open data sources, and the reference objects and water surfaces are cropped from these images. Second, the reference objects and water surfaces are enriched via data augmentation techniques to ensure diversity. Finally, the enriched reference objects and water surfaces are combined to generate a synthetic image dataset with annotations. The synthetic image dataset is then used to train a CNN-based object detection model. Waterlogging depths are calculated from the reference objects detected by the trained model. A real video dataset and an artificial image dataset are used to evaluate the effectiveness of the proposed method. The results show that the detection model trained on the synthetic image dataset can effectively detect reference objects in images and can achieve acceptable accuracy in waterlogging depth estimation based on the detected reference objects. The proposed method has the potential to monitor waterlogging depths at a city scale.
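
    The compositing step described above can be prototyped with standard image libraries. The sketch below is a minimal illustration, not the authors' pipeline: it assumes the cropped reference objects are stored as RGBA cutouts with transparent backgrounds, and the file names, annotation format, and augmentation choices are hypothetical.

    ```python
    import random
    from pathlib import Path

    from PIL import Image, ImageOps  # Pillow

    def compose_synthetic_image(background_path, object_paths, out_path):
        """Paste cropped reference objects onto a water-surface background and
        return simple bounding-box annotations (x, y, width, height)."""
        bg = Image.open(background_path).convert("RGB")
        annotations = []
        for obj_path in object_paths:
            obj = Image.open(obj_path).convert("RGBA")
            # Basic augmentation: random scaling and horizontal mirroring.
            scale = random.uniform(0.5, 1.2)
            obj = obj.resize((max(1, int(obj.width * scale)),
                              max(1, int(obj.height * scale))))
            if random.random() < 0.5:
                obj = ImageOps.mirror(obj)
            # Random placement; the alpha channel acts as the paste mask.
            x = random.randint(0, max(0, bg.width - obj.width))
            y = random.randint(0, max(0, bg.height - obj.height))
            bg.paste(obj, (x, y), obj)
            annotations.append({"label": Path(obj_path).stem,
                                "bbox": [x, y, obj.width, obj.height]})
        bg.save(out_path)
        return annotations

    # Hypothetical usage:
    # anns = compose_synthetic_image("water_surface_01.jpg",
    #                                ["traffic_sign.png", "bus_stop.png"],
    #                                "synthetic_0001.jpg")
    ```

    The returned bounding boxes would then be converted into whatever annotation format the chosen detector expects before training.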

Olof Bryngdahl - One of the best experts on this subject based on the ideXlab platform.

  • Generalized model of Synthetic Image hologram structures
    Practical Holography X, 1996
    Co-Authors: Andreas Jendral, Olof Bryngdahl
    Abstract:

    Three basic types of synthetic near-field hologram structures have been used for 3D display purposes: the conventional image hologram, the holographic stereogram, and the partial-pixel architecture. We show that these methods are clearly related and use this result to design an efficient algorithm for the generation of synthetic image hologram structures.

  • Synthetic Image holograms: computation and properties
    Optics Communications, 1994
    Co-Authors: Andreas Jendral, Ralf Bräuer, Olof Bryngdahl
    Abstract:

    Synthetic image holograms with apertures of at least several centimeters have been suggested by Leseberg using a special optical reconstruction setup. Here it is demonstrated how a lensless setup can be chosen without degrading the quality of the reconstruction. The synthetic image holograms place only minimal requirements on the coherence of the light used for reconstruction. Optical reconstructions are shown using an extended white-light source.

Nicholas Ayache - One of the best experts on this subject based on the ideXlab platform.

  • Cardiac Electrophysiological Activation Pattern Estimation From Images Using a Patient-Specific Database of Synthetic Image Sequences
    IEEE Transactions on Biomedical Engineering, 2014
    Co-Authors: Adityo Prakosa, Maxime Sermesant, Pascal Allain, Nicolas Villain, Aldo C Rinaldi, Kawal Rhode, Reza Razavi, Herve Delingette, Nicholas Ayache
    Abstract:

    While abnormal patterns of cardiac electrophysiological activation are at the origin of important cardiovascular diseases (e.g., arrhythmia, asynchrony), the only clinically available method to observe the detailed activation pattern of the left ventricular endocardial surface is invasive catheter mapping. However, this electrophysiological activation controls the onset of mechanical contraction; therefore, important information about the electrophysiology can be deduced from detailed observation of the resulting motion patterns. In this paper, we study this inverse cardiac electrokinematic relationship. The objective is to predict the activation pattern from the cardiac motion obtained by analyzing cardiac image sequences. To achieve this, we create a rich patient-specific database of synthetic time series of cardiac images using simulations of a personalized cardiac electromechanical model, in order to study the complex relationship between electrical activity and kinematic patterns in the context of a specific patient. We use this database to train a machine-learning algorithm that estimates the depolarization time of each cardiac segment from global and regional kinematic descriptors based on displacements or strains and their derivatives. Finally, we use this learning to estimate the patient's electrical activation times from the acquired clinical images. Experiments on the inverse electrokinematic learning are demonstrated on synthetic sequences and evaluated on clinical data with promising results. The error between our prediction and the invasive intracardiac mapping ground truth is relatively small (around 10 ms for ischemic patients and 20 ms for nonischemic patients). This approach suggests the possibility of noninvasive electrophysiological pattern estimation from cardiac motion imaging.
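
    The paper's specific learning machinery is not reproduced here; the sketch below only illustrates the general shape of inverse electrokinematic learning, i.e., regressing segmentwise depolarization times from kinematic descriptors over a synthetic database. The random data, feature counts, and the choice of a random-forest regressor are assumptions made for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical synthetic database: each row holds regional kinematic
    # descriptors (displacements, strains, and their derivatives) for one
    # simulated sequence; targets are the segmentwise depolarization times
    # produced by the personalized electromechanical model.
    rng = np.random.default_rng(0)
    n_simulations, n_descriptors, n_segments = 500, 120, 17
    X = rng.normal(size=(n_simulations, n_descriptors))        # kinematic features
    y = rng.uniform(0, 120, size=(n_simulations, n_segments))  # depolarization times (ms)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)

    # Multi-output regression: one activation time per cardiac segment.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print(f"mean absolute activation-time error: {np.mean(np.abs(pred - y_test)):.1f} ms")
    ```

    In the actual workflow, the trained regressor would be applied to descriptors extracted from the patient's clinical image sequence rather than to held-out simulations.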

  • Cardiac motion estimation using a proactive deformable model: evaluation and sensitivity analysis
    2010
    Co-Authors: Ken C.l. Wong, Maxime Sermesant, Herve Delingette, Florence Billet, Tommaso Mansi, Radomir Chabiniok, Nicholas Ayache
    Abstract:

    To regularize cardiac motion recovery from medical images, electromechanical models are increasingly popular for providing a priori physiological motion information. Although these models are macroscopic, many parameters still need to be specified for accurate and robust recovery. In this paper, we provide a sensitivity analysis of a proactive electromechanical model-based cardiac motion tracking framework by studying the impact of its model parameters. Our sensitivity analysis differs from other works in that it evaluates motion recovery on a synthetic image sequence with a known displacement field as well as on cine and tagged MRI sequences. This analysis helps identify which parameters should be estimated from patient-specific data and which can be set from values in the literature.
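
    The electromechanical tracking framework itself is outside the scope of this abstract; the sketch below only illustrates the structure of a one-at-a-time sensitivity sweep against a synthetic sequence with a known displacement field. The tracker, parameter names, and error metric are placeholders, not the paper's implementation.

    ```python
    import numpy as np

    def toy_tracker(truth, params):
        """Stand-in for the model-based motion-recovery step; it simply perturbs
        the known displacement field so the sweep below is runnable."""
        bias = (0.10 * abs(params["stiffness"] - 1.0)
                + 0.05 * abs(params["contractility"] - 1.0))
        rng = np.random.default_rng(0)
        return truth + bias + 0.01 * rng.normal(size=truth.shape)

    def displacement_error(recovered, truth):
        """Mean point-wise displacement error (arbitrary units)."""
        return float(np.mean(np.linalg.norm(recovered - truth, axis=-1)))

    baseline = {"contractility": 1.0, "stiffness": 1.0}
    truth = np.zeros((1000, 3))  # known displacement field of the synthetic sequence

    # One-at-a-time sweep: vary each parameter while keeping the others at baseline.
    for name in baseline:
        for factor in (0.5, 1.0, 2.0):
            params = dict(baseline, **{name: baseline[name] * factor})
            err = displacement_error(toy_tracker(truth, params), truth)
            print(f"{name} x{factor}: error = {err:.4f}")
    ```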

John R. Schott - One of the best experts on this subject based on the ideXlab platform.

  • Incorporation of polarization into the DIRSIG Synthetic Image generation model
    Imaging Spectrometry VIII, 2002
    Co-Authors: Jason P. Meyers, John R. Schott, Scott D. Brown
    Abstract:

    The Digital Imaging and Remote Sensing Synthetic Image Generation (DIRSIG) model uses a quantitative first-principles approach to generate synthetic hyperspectral imagery. This paper presents the methods used to add modeling of polarization phenomenology. The radiative transfer equations were modified to use Stokes vectors for the radiance values and Mueller matrices for the energy-matter interactions. The use of Stokes vectors enables a full polarimetric characterization of the illumination and sensor-reaching radiances. The bi-directional reflectance distribution function (BRDF) module was rewritten and modularized to accommodate a variety of polarized and unpolarized BRDF models. Two new BRDF models based on Torrance-Sparrow and Beard-Maxwell were added to provide polarized BRDF estimations. The sensor polarization characteristics are modeled using Mueller matrix transformations on a per-pixel basis. All polarized radiative transfer calculations are performed spectrally to preserve the hyperspectral capabilities of DIRSIG. Integration over sensor bandpasses is handled by the sensor module.
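
    DIRSIG's internals are not given in this abstract; the snippet below is only a generic illustration of the per-pixel operation the abstract describes, namely propagating Stokes vectors through 4x4 Mueller matrices. The array shapes and the ideal-polarizer example are assumptions for demonstration.

    ```python
    import numpy as np

    def apply_mueller(stokes_cube, mueller):
        """Apply a 4x4 Mueller matrix to every pixel of a Stokes-vector image.

        stokes_cube : (H, W, 4) array of [S0, S1, S2, S3] per pixel
        mueller     : (4, 4) Mueller matrix of a surface or sensor element
        """
        return np.einsum("ij,hwj->hwi", mueller, stokes_cube)

    def degree_of_linear_polarization(stokes_cube):
        s0, s1, s2 = stokes_cube[..., 0], stokes_cube[..., 1], stokes_cube[..., 2]
        return np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-12, None)

    # Example: unpolarized radiance passing through an ideal horizontal polarizer.
    h, w = 4, 4
    unpolarized = np.zeros((h, w, 4))
    unpolarized[..., 0] = 1.0                     # S0 = radiance, S1 = S2 = S3 = 0
    horizontal_polarizer = 0.5 * np.array([[1, 1, 0, 0],
                                           [1, 1, 0, 0],
                                           [0, 0, 0, 0],
                                           [0, 0, 0, 0]], dtype=float)
    out = apply_mueller(unpolarized, horizontal_polarizer)
    print(degree_of_linear_polarization(out)[0, 0])  # -> 1.0 (fully polarized)
    ```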

  • An Advanced Synthetic Image Generation Model and Its Application to Multi/Hyperspectral Algorithm Development
    Canadian Journal of Remote Sensing, 1999
    Co-Authors: John R. Schott, Rolando V. Raqueno, Harry N. Gross, S.d. Brown, Gary Robinson
    Abstract:

    The need for robust data sets for algorithm development and testing has prompted interest in synthetic images as a supplement to real images. Advanced algorithms rely heavily on synthetic images to reproduce both the spectro-radiometric and spatial characteristics of real images. This article describes a synthetic image generation (SIG) model designed to include the radiometric processes that affect the formation and capture of spectral images. In particular, it addresses recent developments in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The requirements on the SIG process are highlighted using two advanced algorithms that rely on SIG images to support algorithm development.

  • Advanced Synthetic Image generation models and their application to multi/hyperspectral algorithm development
    27th AIPR Workshop: Advances in Computer-Assisted Recognition, 1999
    Co-Authors: John R. Schott, Scott D. Brown, Rolando V. Raqueno, Harry N. Gross, Gary D. Robinson
    Abstract:

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would otherwise require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements over the entire image area that are not subject to measurement error, thereby allowing the user to evaluate the performance of their algorithm more accurately. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level, and thermal image inputs for broadband, multi- and hyperspectral exploitation algorithms.

  • Incorporation of enhanced texture/transition modeling tools into a Synthetic Image generation model
    IGARSS '98. Sensing and Managing the Environment. 1998 IEEE International Geoscience and Remote Sensing. Symposium Proceedings. (Cat. No.98CH36174), 1998
    Co-Authors: John R. Schott, S.d. Brown
    Abstract:

    The authors show that incorporating patterned spatial mixing between classes and within-class texture can greatly increase the fidelity of synthetic image generation (SIG) for both visual assessment and quantitative analysis with multi/hyperspectral analysis tools. We have demonstrated how within- and between-class spatial/spectral variability can be introduced into a SIG model that includes a full radiometric processing chain. Further effort in this area needs to focus on exercising this capability for specific scenarios and evaluating the quantitative fidelity of the resulting images.
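
    This is not the DIRSIG texture machinery; the toy sketch below merely illustrates the two ingredients the abstract names, between-class spatial mixing (a smooth class-fraction map) and within-class texture (spatially correlated perturbations), applied to hypothetical class mean spectra.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    h, w, bands = 64, 64, 6

    # Mean reflectance spectra for two hypothetical classes (e.g., grass, soil).
    grass = np.linspace(0.05, 0.45, bands)
    soil = np.linspace(0.15, 0.30, bands)

    # Between-class mixing: a smooth fraction map gives gradual transitions.
    fraction = gaussian_filter(rng.random((h, w)), sigma=8)
    fraction = (fraction - fraction.min()) / (fraction.max() - fraction.min())

    # Within-class texture: spatially correlated perturbations added per pixel.
    texture = gaussian_filter(rng.normal(scale=0.02, size=(h, w)), sigma=2)

    scene = (fraction[..., None] * grass
             + (1.0 - fraction[..., None]) * soil
             + texture[..., None])
    print(scene.shape)  # (64, 64, 6) synthetic multispectral cube
    ```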

  • Prediction of observed Image spectra using Synthetic Image generation models
    Imaging Spectrometry III, 1997
    Co-Authors: John R. Schott, Shiao Didi Kuo, Scott D. Brown, Rolando V. Raqueno
    Abstract:

    Most spectrometric image analysis algorithms either require or can be augmented by estimates of target/background spectral signatures. The prediction of these spectra is complicated by the complex interplay of the target spectrum, background spectra, energy-matter interaction effects, atmospheric effects, sensor response, and noise. Signatures can be further confused in the thermal IR by the temperatures and temperature variations of targets and backgrounds. Finally, in nearly all cases, the image signature is the result of spatial mixing of target and background spectra. This paper addresses the potential for using synthetic image generation modeling tools to help predict and understand hyperspectral signatures. The DIRSIG model is discussed in terms of how it handles target/background interactions spectrally, atmospheric propagation, and sensor spectral response, geometric MTF, and noise effects. The DIRSIG model enables the estimation of mixed-pixel 'image' spectra as they would be observed by an actual system imaging a complex 3D scene.
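
    DIRSIG's radiative transfer is far more complete than this; the sketch below is only a schematic of the chain the abstract lists (mixed target/background reflectance, a simple atmosphere, and a sensor band response), with all numeric values invented for illustration.

    ```python
    import numpy as np

    wavelengths = np.linspace(0.4, 2.5, 211)   # micrometres

    # Hypothetical reflectances and illumination/atmosphere terms (toy values).
    rho_target = np.full_like(wavelengths, 0.30)
    rho_background = 0.10 + 0.20 * (wavelengths - 0.4) / 2.1
    e_sun = 1800.0 * np.exp(-((wavelengths - 0.5) ** 2) / 0.5)   # solar irradiance
    tau = np.exp(-0.2 / wavelengths)                             # atmospheric transmission
    l_path = 20.0 * np.exp(-3.0 * (wavelengths - 0.4))           # path radiance

    def mixed_pixel_radiance(fill_fraction, solar_zenith_deg=30.0):
        """Simplified reflective-domain at-sensor radiance for a mixed pixel."""
        rho_mix = fill_fraction * rho_target + (1.0 - fill_fraction) * rho_background
        return (tau * rho_mix * e_sun * np.cos(np.radians(solar_zenith_deg)) / np.pi
                + l_path)

    def band_average(spectrum, center, fwhm):
        """Weight the spectrum by a Gaussian band response (a toy sensor model)."""
        response = np.exp(-4.0 * np.log(2.0) * ((wavelengths - center) / fwhm) ** 2)
        return float(np.sum(spectrum * response) / np.sum(response))

    observed = mixed_pixel_radiance(fill_fraction=0.25)
    print(band_average(observed, center=0.86, fwhm=0.04))
    ```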

Robert W. Maxson - One of the best experts on this subject based on the ideXlab platform.

  • Comparison of areal extent of snow as determined by AVHRR and SSM/I satellite Imagery
    1992
    Co-Authors: Robert W. Maxson
    Abstract:

    Advanced Very High Resolution Radiometer (AVHRR) and Special Sensor Microwave Imager (SSM/I) imagery are compared to determine the areal extent of snow. A multi-spectral AVHRR algorithm, utilizing channels 1 (0.63 µm), 2 (0.87 µm), 3 (3.7 µm), and 4 (11.0 µm), creates a synthetic image that classifies land, snow, water, and clouds. The classified images created by this algorithm serve as a baseline for a second algorithm that examines spatially and temporally matched SSM/I imagery. The SSM/I separation algorithm uses the 85 GHz horizontally polarized channel as well as the 37 GHz horizontally and vertically polarized channels. The synthetic image created by this algorithm classifies land, snow, and water. Both separation algorithms use empirically derived separation thresholds obtained from bi-spectral scatter plots. Separation is made at a given pixel location based on the radiative identity assigned to that location from various wavelength combinations. The AVHRR data provide high-resolution daytime images of the snow pack but are completely dependent on the absence of clouds to view this ground-based feature. The SSM/I data give lower-resolution imagery of the snow during daylight or nighttime satellite passes and are not affected by the presence of nonprecipitating clouds. A total of 12 sub-scenes are analyzed using both data sets, and general agreement between the two sets of imagery is established. Keywords: AVHRR, imagery, satellite, SSM/I, snow.
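
    The abstract does not give the empirically derived thresholds, so the sketch below is only a schematic of a bi-spectral threshold classifier of the kind described; the channel tests and all threshold values are placeholders, not those of the study.

    ```python
    import numpy as np

    def classify_pixels(ch1_refl, ch3_refl):
        """Toy threshold classifier producing land/snow/water/cloud labels.
        Threshold values here are placeholders; the study derived its own
        empirically from bi-spectral scatter plots."""
        classes = np.full(ch1_refl.shape, "land", dtype=object)
        classes[ch1_refl < 0.06] = "water"                  # dark in the visible
        bright = ch1_refl > 0.35
        classes[bright & (ch3_refl < 0.10)] = "snow"        # bright, dark at 3.7 um
        classes[bright & (ch3_refl >= 0.10)] = "cloud"      # bright, reflective at 3.7 um
        return classes

    rng = np.random.default_rng(0)
    ch1 = rng.uniform(0.0, 0.6, size=(4, 4))   # channel 1 reflectance (0.63 um)
    ch3 = rng.uniform(0.0, 0.3, size=(4, 4))   # channel 3 reflective component (3.7 um)
    print(classify_pixels(ch1, ch3))
    ```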
