Fused Image

The Experts below are selected from a list of 15,585 Experts worldwide, ranked by the ideXlab platform

Yi Chai - One of the best experts on this subject based on the ideXlab platform.

  • A novel geometric dictionary construction approach for sparse representation based image fusion
    Entropy, 2017
    Co-Authors: Kunpeng Wang, Zhiqin Zhu, Yi Chai
    Abstract:

    Sparse-representation-based approaches have been integrated into image fusion methods in recent years and show great performance. Training an informative yet compact dictionary is a key step for a sparsity-based image fusion method, but it is difficult to balance "informative" and "compact". To obtain sufficient information for sparse representation in dictionary construction, this paper classifies image patches from the source images into groups based on morphological similarity. Stochastic coordinate coding (SCC) is used to extract the corresponding image-patch information for dictionary construction. With the constructed dictionary, image patches of the source images are converted to sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverted to form the fused image. Comparative experiments evaluate the fused image in terms of image features, information, structural similarity, and visual perception. The results confirm the feasibility and effectiveness of the proposed image fusion solution.
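The Max-L1 rule in the final step reduces to picking, patch by patch, the sparse-coefficient vector with the larger L1 norm. A minimal numpy sketch (the function name and the toy 3-atom coefficients are illustrative; in the paper the coefficients would come from SOMP over the learned dictionary):

```python
import numpy as np

def max_l1_fuse(coeffs_a, coeffs_b):
    """Max-L1 rule: for each patch (column), keep the sparse-coefficient
    vector whose L1 norm (activity level) is larger."""
    l1_a = np.abs(coeffs_a).sum(axis=0)   # per-patch activity of source A
    l1_b = np.abs(coeffs_b).sum(axis=0)   # per-patch activity of source B
    return np.where(l1_a >= l1_b, coeffs_a, coeffs_b)

# Toy example: 3 dictionary atoms, 2 patches.
A = np.array([[1.0, 0.0], [0.5, 0.2], [0.0, 0.1]])
B = np.array([[0.2, 2.0], [0.1, 0.0], [0.0, 0.5]])
F = max_l1_fuse(A, B)   # column 0 from A (L1 = 1.5 vs 0.3), column 1 from B
```

The fused coefficients would then be multiplied by the dictionary atoms and the resulting patches re-assembled into the fused image.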

  • A novel approach for multimodal medical image fusion
    Expert Systems With Applications, 2014
    Co-Authors: Yi Chai, Simon X. Yang
    Abstract:

    Fusion of multimodal medical images increases robustness and enhances accuracy in biomedical research and clinical diagnosis, and has attracted much attention over the past decade. In this paper, an efficient multimodal medical image fusion approach based on compressive sensing is presented to fuse computed tomography (CT) and magnetic resonance imaging (MRI) images. The significant sparse coefficients of the CT and MRI images are acquired via a multi-scale discrete wavelet transform. A proposed weighted fusion rule is utilized to fuse the high-frequency coefficients of the source medical images, while a pulse-coupled neural network (PCNN) fusion rule is exploited to fuse the low-frequency coefficients. A random Gaussian matrix is used for encoding and measurement. The fused image is reconstructed via the Compressive Sampling Matching Pursuit (CoSaMP) algorithm. Several comparative experiments are conducted to show the efficiency of the proposed approach. The results reveal that it achieves better fused-image quality than existing state-of-the-art methods, with high stability, good flexibility, and low time consumption.
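The wavelet side of such a pipeline can be sketched with a one-level Haar transform in plain numpy. Note this is an illustrative approximation, not the authors' method: plain averaging stands in for the PCNN low-frequency rule, and an absolute-maximum rule stands in for the proposed weighted high-frequency rule.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse_dwt(img_a, img_b):
    """Fuse: average the approximation band, take the detail coefficient
    with the larger magnitude in each high-frequency band."""
    ca, cb = haar_dwt2(img_a), haar_dwt2(img_b)
    LL = (ca[0] + cb[0]) / 2.0
    highs = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
             for ha, hb in zip(ca[1:], cb[1:])]
    return haar_idwt2(LL, *highs)
```

Fusing an image with itself reconstructs the image exactly, which is a convenient sanity check on the transform pair.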

  • A new fusion scheme for multifocus images based on focused pixels detection
    Machine Vision and Applications, 2013
    Co-Authors: Yi Chai
    Abstract:

    In this paper, a new multifocus image fusion scheme based on focused pixels detection is proposed. First, an improved multiscale top-hat (MTH) transform, which is more effective than the traditional top-hat transform in extracting focus information, is introduced and used to detect the pixels of the focused regions. Second, an initial decision map of the source images is generated by comparing the improved MTH value of each pixel; an isolated-regions removal method is then developed and employed to refine it. To improve the quality of the fused image and avoid discontinuity in the transition zone, a dual sliding window technique and a fusion strategy based on a multiscale transform are developed to fuse the transition zones. Finally, the decision maps of the focused regions and the transition zones together guide the fusion process to form the final fused image. Experimental results show that the proposed method outperforms conventional multifocus image fusion methods in both subjective and objective quality.
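A rough focused-pixel detector in this spirit can be built from grayscale morphology alone: the white top-hat (image minus its opening) responds to fine bright structure, and summing top-hats over several structuring-element radii gives a multiscale focus measure. This is a simplified stand-in for the paper's improved MTH transform, with illustrative names:

```python
import numpy as np

def _filter(img, r, op):
    """Flat-square morphological filter of radius r (op = np.min for
    erosion, np.max for dilation), with edge padding."""
    p = np.pad(img, r, mode='edge')
    windows = [p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(2 * r + 1) for j in range(2 * r + 1)]
    return op(np.stack(windows), axis=0)

def white_top_hat(img, r):
    """Image minus its opening (erosion followed by dilation)."""
    opened = _filter(_filter(img, r, np.min), r, np.max)
    return img - opened

def multiscale_top_hat(img, radii=(1, 2, 3)):
    """Accumulate white top-hats over several scales; large values
    indicate fine bright structure, a proxy for focus."""
    return sum(white_top_hat(img, r) for r in radii)

def decision_map(img_a, img_b, radii=(1, 2)):
    """True where source A looks more focused than source B."""
    return multiscale_top_hat(img_a, radii) >= multiscale_top_hat(img_b, radii)
```

A flat region produces a zero top-hat, while an isolated bright pixel survives unchanged, which matches the intuition that the measure responds to sharp detail.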

  • Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain
    Optik, 2012
    Co-Authors: Yi Chai, Xiaoyang Zhang
    Abstract:

    In this paper, an efficient multifocus image fusion approach is proposed based on the local features contrast of multiscale products in the nonsubsampled contourlet transform (NSCT) domain. To improve the robustness of the fusion algorithm to noise and to select the coefficients of the fused image properly, multiscale products, which distinguish edge structures from noise more effectively in the NSCT domain, are developed and introduced into the image fusion field. The selection principles for the different subband coefficients obtained by the NSCT decomposition are discussed in detail. To improve the quality of the fused image, novel local features contrast measurements, shown to be better suited to the human visual system and able to extract more useful detail information from the source images, are developed and used to select coefficients from the clear parts of the subimages to compose the coefficients of the fused image. Experimental results demonstrate that the proposed method performs very well in fusing both noisy and noise-free multifocus images, and outperforms conventional methods in terms of both visual quality and objective evaluation criteria.
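The multiscale-product idea can be illustrated directly: multiply detail coefficients at adjacent scales, so that edges (which persist across scales) give large products while scale-incoherent noise does not, then select fused coefficients by product magnitude rather than raw magnitude. A hedged numpy sketch with illustrative names; a real NSCT decomposition would supply the sub-bands:

```python
import numpy as np

def multiscale_product(c_fine, c_coarse):
    """Pointwise product of detail coefficients at adjacent scales.
    Edge responses correlate across scales, so their product is large;
    noise is scale-incoherent and its product stays small."""
    return c_fine * c_coarse

def select_coeffs(a_fine, a_coarse, b_fine, b_coarse):
    """Keep the fine-scale coefficient from the source whose
    cross-scale product magnitude is larger."""
    pa = np.abs(multiscale_product(a_fine, a_coarse))
    pb = np.abs(multiscale_product(b_fine, b_coarse))
    return np.where(pa >= pb, a_fine, b_fine)
```

In the toy case below, source A has an edge-like response coherent across scales at position 0, while source B has an isolated (noise-like) response at position 1 that is still selected only because A is zero there.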

  • Multifocus image fusion and denoising scheme based on homogeneity similarity
    Optics Communications, 2012
    Co-Authors: Yi Chai, Hongpeng Yin, Guoquan Liu
    Abstract:

    A novel image fusion algorithm based on homogeneity similarity is proposed in this paper, aimed at fusing clean and noisy multifocus images. First, an initial fused image is acquired with a multiresolution image fusion method. Pixels of the source images that are similar to the corresponding pixels of the initial fused image are considered to lie in the sharply focused regions; in this way the initial focused regions are determined. To improve the fusion performance, morphological opening and closing are employed for post-processing. Second, homogeneity similarity is introduced and used to fuse the clean and noisy multifocus images. Finally, the fused image is obtained by weighting the neighborhood pixels of the source-image points located in the focused regions. Experimental results demonstrate that, for clean multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality. Furthermore, it can simultaneously solve the image restoration and fusion problem when the source multifocus images are corrupted by Gaussian white noise, again outperforming the conventional methods.
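The focused-region test can be sketched as comparing, per pixel, which source image's neighborhood better matches the initial fused image. The names below are illustrative and the paper's homogeneity similarity measure is more involved; this uses a plain windowed mean absolute difference:

```python
import numpy as np

def local_mean(img, r=1):
    """Mean over a (2r+1) x (2r+1) window with edge padding."""
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (2 * r + 1) ** 2

def initial_focus_map(src_a, src_b, fused_init, r=1):
    """True where source A's neighborhood is more similar to the
    initial fused image than source B's (i.e. A is taken as focused)."""
    da = local_mean(np.abs(src_a - fused_init), r)
    db = local_mean(np.abs(src_b - fused_init), r)
    return da <= db
```

If one source equals the initial fused image everywhere, it wins everywhere, which is the degenerate sanity check below.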

Zhihua Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Multi-focus image fusion based on non-negative matrix factorization and difference images
    Signal Processing, 2014
    Co-Authors: Yongxin Zhang, Li Chen, Jian Jia, Zhihua Zhao
    Abstract:

    Multi-focus image fusion combines multiple source images taken with different focus settings into one image, so that the resulting image appears sharper. One key to image fusion is how to represent the source images effectively and completely. To address this problem, this study proposes a novel multi-focus image fusion scheme based on non-negative matrix factorization (NMF) and difference images. A temporary fused image is constructed by fusing the registered source images with NMF. The focused regions of the source images are detected from the salient features of the difference images between the temporary fused image and the source images. The final fused image is produced by combining the focused regions. Experimental results demonstrate that the proposed method efficiently represents the source images and significantly improves fusion quality compared with other existing fusion methods, in terms of both visual and quantitative evaluations.
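One compact way to see the temporary-fused-image step: flatten each registered source image into a column of a matrix V and run rank-1 NMF, so the reconstruction acts as a shared base image for both sources. The sketch below uses textbook Lee-Seung multiplicative updates, not necessarily the authors' exact NMF variant, and the function names are illustrative:

```python
import numpy as np

def nmf(V, k, iters=300, seed=0):
    """Rank-k NMF (V ~= W @ H, all factors non-negative) via
    Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def temporary_fused(img_a, img_b, iters=300):
    """Stack the registered sources as columns and take the mean of the
    rank-1 reconstruction columns as the temporary fused image."""
    V = np.stack([img_a.ravel(), img_b.ravel()], axis=1)  # pixels x 2
    W, H = nmf(V, k=1, iters=iters)
    return (W @ H).mean(axis=1).reshape(img_a.shape)
```

Difference images such as |source - temporary_fused| would then be examined for salient features to locate the focused regions, as the abstract describes.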

Bin Xiao - One of the best experts on this subject based on the ideXlab platform.

  • Anatomical-functional image fusion by information of interest in local Laplacian filtering domain
    IEEE Transactions on Image Processing, 2017
    Co-Authors: Bin Xiao
    Abstract:

    A novel method is presented for fusing anatomical (magnetic resonance imaging) and functional (positron emission tomography or single photon emission computed tomography) images. The method merges specific feature information from input images of a single medical imaging modality or multiple modalities into a single fused image, preserving more information while generating less distortion. The proposed method uses local Laplacian filtering realized through a novel multi-scale system architecture. First, the input images are converted to a multi-scale representation and processed with local Laplacian filtering. Second, at each scale, the decomposed images are combined to produce fused approximate images using a local energy maximum scheme, and fused residual images using an information-of-interest-based scheme. Finally, the fused image is obtained by a reconstruction process analogous to that of the conventional Laplacian pyramid transform. Experimental results computed with individual multi-scale decomposition schemes or fusion rules clearly demonstrate the superiority of the proposed method in subjective observation as well as objective metrics, and it achieves better performance than state-of-the-art fusion methods.
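The local energy maximum scheme for the approximate images can be sketched as a windowed sum of squares followed by per-pixel selection. Plain numpy, with illustrative names; the paper applies this inside its local-Laplacian multi-scale architecture:

```python
import numpy as np

def local_energy(img, r=1):
    """Sum of squared values over a (2r+1) x (2r+1) window."""
    p = np.pad(img ** 2, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def fuse_local_energy(a, b, r=1):
    """Per pixel, keep the value from the image with larger local energy."""
    return np.where(local_energy(a, r) >= local_energy(b, r), a, b)
```

When one input has uniformly stronger signal, it wins everywhere, which is the trivial check below.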

  • Medical image fusion by combining parallel features on multi-scale local extrema scheme
    Knowledge-Based Systems, 2016
    Co-Authors: Bin Xiao, Qamar Nawaz
    Abstract:

    Highlights: a color saliency feature algorithm is applied to the PET image to obtain functional information; the Canny operator is applied to the MRI and CT images to obtain anatomical structural information; image entropy is selected as the weight for fusing the smoothed images at different scales; the variance of the luminance image is selected as the weight for fusing the detailed images at different scales.

    Two efficient image fusion algorithms are proposed for constructing a fused image by combining parallel features on a multi-scale local extrema scheme. First, the source image is decomposed into a series of smoothed and detailed images at different scales by the local extrema scheme. Second, the parallel features of edge and color are extracted to obtain saliency maps: the edge saliency weighted map preserves structural information using the Canny edge detection operator, while the color saliency weighted map extracts color and luminance information using a context-aware operator. Third, average and weighted-average schemes are used as the fusion rules for grouping the coefficients of the weighted maps obtained from the smoothed and detailed images. Finally, the fused image is reconstructed from the fused smoothed and fused detailed images. Experimental results demonstrate that the proposed algorithms perform best among the compared fusion methods in the domain of MRI-CT and MRI-PET fusion.

  • Union Laplacian pyramid with multiple features for medical image fusion
    Neurocomputing, 2016
    Co-Authors: Bin Xiao, Qamar Nawaz
    Abstract:

    The Laplacian pyramid has been widely used for decomposing images into multiple scales. However, the Laplacian pyramid is believed to represent the outline and contrast of images poorly. To tackle this, a union Laplacian pyramid with multiple features is presented for accurately transferring salient features from the input medical images into a single fused image. First, the input images are transformed into multi-scale representations by the Laplacian pyramid. Second, a contrast feature map and an outline feature map are extracted from the images at each scale. Third, an efficient fusion scheme is developed to combine the pyramid coefficients based on the extracted features. Last, the fused image is obtained by a reconstruction process over the inverse pyramid. Visual and statistical analyses show that the quality of the fused image is significantly improved in terms of structural similarity, peak signal-to-noise ratio, standard deviation, and tone-mapped image quality index metrics, and histogram analysis shows that contrast is well preserved. Highlights: an affine transformation is introduced into the pyramid to achieve multiple orientations; the Kirsch method is used to highlight the contrast of the images; a PCA method is also used for highlighting contrast; averaging the different orientations preserves structure; histograms of the images in the experimental part evaluate contrast.
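The pyramid build / fuse / rebuild cycle can be illustrated with a simplified, exactly invertible pyramid: 2x2 mean downsampling and nearest-neighbor upsampling instead of the usual Gaussian kernel, and a plain absolute-maximum detail rule instead of the paper's multi-feature scheme. All names are illustrative:

```python
import numpy as np

def _down(img):
    """2x2 block mean (assumes even dimensions)."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def _up(img):
    """Nearest-neighbor 2x upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    """Detail images per level, plus the final low-pass residual."""
    pyr, cur = [], img
    for _ in range(levels):
        small = _down(cur)
        pyr.append(cur - _up(small))   # detail lost by downsampling
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = _up(cur) + lap
    return cur

def fuse_pyramids(img_a, img_b, levels=2):
    """Max-magnitude rule on detail levels, average on the residual."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)
    return reconstruct(fused)
```

Because each level stores exactly what downsampling discards, reconstruction is lossless, so fusing an image with itself returns it unchanged.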

Yong Yang - One of the best experts on this subject based on the ideXlab platform.

  • Multifocus image fusion based on extreme learning machine and human visual system
    IEEE Access, 2017
    Co-Authors: Yong Yang, Mei Yang, Shuying Huang, Yue Que, Min Ding, Jun Sun
    Abstract:

    Multifocus image fusion generates a single image by combining the redundant and complementary information of multiple images of the same scene; the combination contains more information about the scene than any individual source image. In this paper, a novel multifocus image fusion method based on an extreme learning machine (ELM) and the human visual system is proposed. First, three visual features that reflect the clarity of a pixel are extracted and used to train the ELM to judge which pixel is clearer; the clearer pixels are used to construct the initial fused image. Second, the similarity between each source image and the initial fused image is measured, and morphological opening and closing operations are performed to obtain the focused regions. Last, the final fused image is produced by applying a fusion rule to the focused regions and the initial fused image. Experimental results indicate that the proposed method is more effective than a series of existing popular fusion methods in terms of both subjective and objective evaluations.
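The ELM itself is simple to sketch: a fixed random hidden layer, with output weights solved in closed form by least squares. In the paper the inputs would be the three clarity features per pixel and the label would say which source image is sharper; the toy data below is purely illustrative:

```python
import numpy as np

def elm_train(X, y, hidden=64, seed=0):
    """Extreme learning machine: random input weights stay fixed,
    output weights solved by least squares on the hidden activations."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # closed-form solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Training is a single linear solve, which is why ELMs are attractive for per-pixel decisions: no iterative backpropagation is needed.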

  • Multifocus image fusion based on NSCT and focused area detection
    IEEE Sensors Journal, 2014
    Co-Authors: Yong Yang, Song Tong, Shuying Huang
    Abstract:

    To overcome the difficulties of sub-band coefficient selection in multiscale-transform-domain image fusion and the block effects suffered by spatial-domain image fusion, this paper presents a novel hybrid multifocus image fusion method. First, the source multifocus images are decomposed using the nonsubsampled contourlet transform (NSCT): the low-frequency sub-band coefficients are fused by a sum-modified-Laplacian-based local visual contrast, whereas the high-frequency sub-band coefficients are fused by local Log-Gabor energy, and the initial fused image is reconstructed by the inverse NSCT of the fused coefficients. Second, by analyzing the similarity between the initial fused image and the source images, an initial focused-area detection map is obtained and refined into the decision map by a mathematical-morphology post-processing technique. Finally, based on the decision map, the final fused image is obtained by selecting the pixels in the focused areas and taking the pixels on the focus-region boundary from the initial fused image. Experimental results demonstrate that the proposed method is better than various existing transform-based fusion methods, including gradient pyramid transform, discrete wavelet transform, NSCT, and a spatial-based method, in terms of both subjective and objective evaluations.
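The sum-modified-Laplacian underlying the low-frequency rule has a standard formulation that is easy to state in numpy: the modified Laplacian takes absolute second differences along each axis separately, and SML sums it over a small window. This is the common textbook form; the paper's local visual contrast builds on it:

```python
import numpy as np

def modified_laplacian(img):
    """ML(x,y) = |2I(x,y) - I(x-1,y) - I(x+1,y)|
               + |2I(x,y) - I(x,y-1) - I(x,y+1)|, with edge padding."""
    p = np.pad(img, 1, mode='edge')
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]) +
            np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]))

def sml(img, r=1):
    """Sum-modified-Laplacian: windowed sum of the modified Laplacian."""
    ml = modified_laplacian(img)
    p = np.pad(ml, r, mode='constant')
    out = np.zeros_like(img, dtype=float)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```

A textured patch (e.g. a checkerboard) scores higher than a constant patch, which is exactly the focus-discriminating behavior the fusion rule relies on.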

  • Medical image fusion via an effective wavelet-based approach
    EURASIP Journal on Advances in Signal Processing, 2010
    Co-Authors: Yong Yang, Dong Sun Park, Shuying Huang
    Abstract:

    A novel wavelet-based approach for medical image fusion is presented, developed by taking into account not only the characteristics of the human visual system (HVS) but also the physical meaning of the wavelet coefficients. After the medical images to be fused are decomposed by the wavelet transform, different fusion schemes are proposed for combining the coefficients: coefficients in the low-frequency band are selected with a visibility-based scheme, and coefficients in the high-frequency bands are selected with a variance-based method. To suppress noise and guarantee the homogeneity of the fused image, all coefficients are subsequently subjected to a window-based consistency verification process. The fused image is finally constructed by the inverse wavelet transform of all composite coefficients. To quantitatively evaluate the performance of the proposed method, a series of experiments and comparisons with existing fusion methods are carried out. Experimental results on simulated and real medical images indicate that the proposed method is effective and achieves satisfactory fusion results.
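Window-based consistency verification is essentially a majority vote on the binary decision map: a coefficient whose neighbors mostly came from the other source image has its decision flipped. A minimal sketch with illustrative names:

```python
import numpy as np

def consistency_verify(decision, r=1):
    """Majority vote in a (2r+1) x (2r+1) window over a boolean
    decision map (True = take coefficient from source A). Isolated
    decisions that disagree with their neighborhood get flipped."""
    d = decision.astype(float)
    p = np.pad(d, r, mode='edge')
    votes = np.zeros_like(d)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            votes += p[i:i + d.shape[0], j:j + d.shape[1]]
    return votes > (2 * r + 1) ** 2 / 2.0
```

A single dissenting pixel surrounded by the opposite decision is corrected, which removes the salt-and-pepper artifacts a raw per-coefficient rule can produce.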

Zhengye Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Nonwovens structure measurement based on NSST multi-focus image fusion
    Micron, 2019
    Co-Authors: Yang Chen, Na Deng, Binjie Xin, Wenyu Xing, Zhengye Zhang
    Abstract:

    Because of the thickness of nonwovens, the depth of field of a digital optical microscope cannot render all fibers sharply in a single image. A new multi-focus image fusion algorithm based on the non-subsampled shearlet transform (NSST) is proposed to improve the quality of the fused image: it fuses a series of images taken from the same viewpoint so that all fibers appear sharp in a single image. The rule of largest absolute value is used to fuse the high-frequency sub-bands, and the rule of largest regional variance is used to fuse the low-frequency sub-band. Comparison with other methods on several image quality indicators shows the superiority of the proposed method. Based on the fused image, fiber diameter and orientation are measured automatically via the Hough transform and image preprocessing, and porosity is measured quickly and conveniently by identifying pores. Experiments show that the structure of nonwoven fabric can be measured rapidly by image processing.
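The two NSST fusion rules named above are straightforward per-pixel selections; a hedged numpy sketch with illustrative names (a real NSST decomposition would supply the sub-bands):

```python
import numpy as np

def regional_variance(img, r=1):
    """Variance over a (2r+1) x (2r+1) window with edge padding."""
    p = np.pad(img, r, mode='edge')
    n = (2 * r + 1) ** 2
    s = np.zeros_like(img, dtype=float)
    s2 = np.zeros_like(img, dtype=float)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            w = p[i:i + img.shape[0], j:j + img.shape[1]]
            s += w
            s2 += w * w
    mean = s / n
    return s2 / n - mean ** 2

def fuse_high(a, b):
    """High-frequency sub-bands: keep the larger absolute value."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_low(a, b, r=1):
    """Low-frequency sub-band: keep the coefficient whose regional
    (local) variance is larger."""
    return np.where(regional_variance(a, r) >= regional_variance(b, r), a, b)
```

After fusing each sub-band with its rule, the inverse NSST would reconstruct the all-in-focus image used for the fiber measurements.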