Pixel Location

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 79,209 Experts worldwide ranked by ideXlab platform

James G Nagy - One of the best experts on this subject based on the ideXlab platform.

  • semi blind sparse affine spectral unmixing of autofluorescence contaminated micrographs
    Bioinformatics, 2020
    Co-Authors: Blair J Rossetti, Steven A Wilbert, Jessica Mark L Welch, Gary G Borisy, James G Nagy
    Abstract:

    Motivation

    Spectral unmixing methods attempt to determine the concentrations of different fluorophores present at each Pixel Location in an image by analyzing a set of measured emission spectra. Unmixing algorithms have shown great promise for applications where samples contain many fluorescent labels; however, existing methods perform poorly when confronted with autofluorescence-contaminated images.

    Results

    We propose an unmixing algorithm designed to separate fluorophores with overlapping emission spectra from contamination by autofluorescence and background fluorescence. First, we formally define a generalization of the linear mixing model, called the affine mixture model (AMM), that specifically accounts for background fluorescence. Second, we use the AMM to derive an affine nonnegative matrix factorization method for estimating fluorophore endmember spectra from reference images. Lastly, we propose a semi-blind sparse affine spectral unmixing (SSASU) algorithm that uses knowledge of the estimated endmembers to learn the autofluorescence and background fluorescence spectra on a per-image basis. When unmixing real-world spectral images contaminated by autofluorescence, SSASU greatly improved proportion indeterminacy as compared to existing methods for a given relative reconstruction error.

    Availability and implementation

    The source code used for this paper was written in Julia and is available with the test data at https://github.com/brossetti/ssasu.
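The core of the affine mixture model can be illustrated with a per-pixel solve: each measured spectrum is modeled as a nonnegative combination of endmember spectra plus a background term, and appending a constant column folds the background into a standard nonnegative least-squares problem. This is a minimal sketch of the AMM idea only, not the paper's SSASU implementation (which is written in Julia and also learns the contamination spectra); all names and data here are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(y, E):
    """Solve min ||[E, 1] @ [c; b] - y|| with c, b >= 0.

    y: measured emission spectrum at one Pixel Location.
    E: (n_channels x n_fluorophores) endmember spectra.
    Returns (concentrations, background level).
    """
    E_aff = np.hstack([E, np.ones((E.shape[0], 1))])  # affine (background) column
    coeffs, _ = nnls(E_aff, y)
    return coeffs[:-1], coeffs[-1]

# Toy example: two Gaussian-shaped endmember spectra over 32 channels
wl = np.linspace(0, 1, 32)
E = np.stack([np.exp(-((wl - 0.3) / 0.1) ** 2),
              np.exp(-((wl - 0.7) / 0.1) ** 2)], axis=1)
y = E @ np.array([2.0, 0.5]) + 0.1          # mixture plus flat background
c, b = unmix_pixel(y, E)                    # recovers the concentrations and 0.1
```

Repeating this solve at every Pixel Location yields concentration maps; the affine column is what distinguishes the AMM from the plain linear mixing model.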

  • semi blind sparse affine spectral unmixing of autofluorescence contaminated micrographs
    bioRxiv, 2019
    Co-Authors: Blair J Rossetti, Steven A Wilbert, Jessica Mark L Welch, Gary G Borisy, James G Nagy
    Abstract:

    Spectral unmixing methods attempt to determine the concentrations of different fluorophores present at each Pixel Location in an image by analyzing a set of measured emission spectra. Unmixing algorithms have shown great promise for applications where samples contain many fluorescent labels; however, existing methods perform poorly when confronted with autofluorescence-contaminated images. We propose an unmixing algorithm designed to separate fluorophores with overlapping emission spectra from contamination by autofluorescence and background fluorescence. First, we formally define a generalization of the linear mixing model, called the affine mixture model (AMM), that specifically accounts for background fluorescence. Second, we use the AMM to derive an affine nonnegative matrix factorization method for estimating fluorophore endmember spectra from reference images. Lastly, we propose a semi-blind sparse affine spectral unmixing (SSASU) algorithm that uses knowledge of the estimated endmembers to learn the autofluorescence and background fluorescence spectra on a per-image basis. When unmixing real-world spectral images contaminated by autofluorescence, SSASU greatly improved proportion indeterminacy as compared to existing methods for a given relative reconstruction error. The source code used for this paper was written in Julia and is available with the test data at https://github.com/brossetti/ssasu.

Y Altunbasak - One of the best experts on this subject based on the ideXlab platform.

  • edge strength filter based color filter array interpolation
    IEEE Transactions on Image Processing, 2012
    Co-Authors: Ibrahim Pekkucuksen, Y Altunbasak
    Abstract:

    For economic reasons, most digital cameras use color filter arrays instead of beam splitters to capture image data. As a result, only one of the required three color samples becomes available at each Pixel Location and the other two need to be interpolated. This process is called Color Filter Array (CFA) interpolation or demosaicing. Many demosaicing algorithms have been introduced over the years to improve subjective and objective interpolation quality. We propose an orientation-free edge strength filter and apply it to the demosaicing problem. The edge strength filter output is utilized both to improve the initial green channel interpolation and to apply the constant color difference rule adaptively. This simple edge-directed method yields visually pleasing results with high CPSNR.
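The green-interpolation step the paper improves can be illustrated with the classic edge-directed rule: at each red or blue site of the Bayer mosaic, interpolate green along the axis with the smaller gradient so that interpolation follows edges rather than crossing them. This sketch uses a GRBG layout and the basic adaptive rule, not the paper's orientation-free edge strength filter.

```python
import numpy as np

def interp_green(cfa):
    """Edge-directed green interpolation on a GRBG Bayer mosaic.

    Green sites are where (row + col) is even; at red/blue sites the
    green value is averaged along the axis with the smaller gradient.
    Border pixels are left untouched for simplicity.
    """
    H, W = cfa.shape
    g = cfa.astype(float).copy()
    for y in range(2, H - 2):
        for x in range(2, W - 2):
            if (y + x) % 2 == 0:
                continue                          # green sample already present
            dh = abs(cfa[y, x - 1] - cfa[y, x + 1])  # horizontal gradient
            dv = abs(cfa[y - 1, x] - cfa[y + 1, x])  # vertical gradient
            if dh < dv:
                g[y, x] = (cfa[y, x - 1] + cfa[y, x + 1]) / 2
            else:
                g[y, x] = (cfa[y - 1, x] + cfa[y + 1, x]) / 2
    return g
```

On a vertical step edge the vertical gradient is zero, so the rule averages along the edge and avoids the color fringing that non-adaptive bilinear interpolation produces.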

  • edge oriented directional color filter array interpolation
    International Conference on Acoustics Speech and Signal Processing, 2011
    Co-Authors: Ibrahim Pekkucuksen, Y Altunbasak
    Abstract:

    Most current digital cameras feature a single-sensor design, which limits the number of channels recorded at each Pixel Location to one. However, a color image is represented with three channels for each Pixel. Color Filter Array (CFA) interpolation is the process of generating a full three-channel color image from a single-channel mosaicked input. We propose a simple edge strength filter to interpolate the missing color values adaptively. While the filter is readily applicable to the Bayer mosaic pattern, we argue that the same idea could be extended to other mosaic patterns and describe its application to the Lukac mosaic pattern. The proposed solution outperforms other available algorithms for the Lukac pattern in terms of both objective and subjective comparison.

  • restoration of bayer sampled image sequences
    The Computer Journal, 2009
    Co-Authors: Murat Gevrekci, Bahadir K Gunturk, Y Altunbasak
    Abstract:

    Spatial resolution of digital images is limited due to optical/sensor blurring and sensor site density. In single-chip digital cameras, the resolution is further degraded because such devices use a color filter array to capture only one spectral component at a Pixel Location. The process of estimating the missing two color values at each Pixel Location is known as demosaicking. Demosaicking methods usually exploit the correlation among color channels. When there are multiple images, it is possible not only to have better estimates of the missing color values but also to improve the spatial resolution further (using super-resolution reconstruction). In this paper, we propose a multi-frame spatial resolution enhancement algorithm based on the projections onto convex sets technique.

  • pocs based restoration of bayer sampled image sequences
    International Conference on Acoustics Speech and Signal Processing, 2007
    Co-Authors: Murat Gevrekci, Bahadir K Gunturk, Y Altunbasak
    Abstract:

    Spatial resolution of digital images is limited due to optical/sensor blurring and sensor site density. In single-chip digital cameras, the resolution is further degraded because such devices use a color filter array to capture only one spectral component at a Pixel Location. The process of estimating the missing two color values at each Pixel Location is known as demosaicking. Demosaicking methods usually exploit the correlation among color channels. When there are multiple images, it is possible not only to have better estimates of the missing color values but also to improve the spatial resolution further (using super-resolution reconstruction). Previously, we proposed a demosaicking algorithm based on the projection onto convex sets (POCS) technique. In this paper, we improve the results of that algorithm by adding a new constraint set based on the spatio-intensity neighborhood. We extend the algorithm to image sequences for multi-frame demosaicking and super-resolution.
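The POCS mechanism underlying both papers can be shown on a toy 1-D problem: alternately project an estimate onto the convex set of signals agreeing with the observed samples and onto the convex set of band-limited signals, and the estimate converges to their intersection. The papers' actual constraint sets (detector model, spatio-intensity neighborhoods) are richer; this sketch only demonstrates the alternating-projection idea with illustrative data.

```python
import numpy as np

def pocs_recover(y, obs_idx, n, band, iters=200):
    """Recover a length-n band-limited signal from samples y at obs_idx.

    C1 = {x : x[obs_idx] = y}        (data consistency, affine set)
    C2 = {x : rfft(x)[band:] = 0}    (band-limited signals, subspace)
    Alternating projections onto C1 and C2 converge to a point in C1 ∩ C2.
    """
    x = np.zeros(n)
    for _ in range(iters):
        x[obs_idx] = y                # project onto C1
        X = np.fft.rfft(x)
        X[band:] = 0                  # project onto C2
        x = np.fft.irfft(X, n)
    x[obs_idx] = y
    return x

n = 64
t = np.arange(n)
truth = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)
obs_idx = np.arange(0, n, 2)          # observe only every other sample
x = pocs_recover(truth[obs_idx], obs_idx, n, band=8)
```

Because the true signal lies in both sets and the band limit rules out the aliases introduced by subsampling, the iteration recovers the missing samples; in the demosaicking setting the "observed samples" are the CFA measurements at each Pixel Location.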

  • image local contrast enhancement using adaptive non linear filters
    International Conference on Image Processing, 2006
    Co-Authors: Tarik Arici, Y Altunbasak
    Abstract:

    We present a locally adaptive non-linear (YENI) filter to obtain the unsharp mask of an image. The unsharp mask obtained by the YENI filter preserves the edges in the image while filtering out the local details, which correspond to mid-range frequencies in the spectrum. The enhanced image using this unsharp mask effectively prevents over/under (o/u) shooting artifacts often observed with other unsharp masking techniques. The enhanced frequency range also spans lower frequencies compared to the techniques that are based on Laplacian filter variants. This improves the visual quality of the image, as measured subjectively and objectively in the real-video experiments. Furthermore, since the YENI filter reduces to an IIR filter at each Pixel Location, it has a low computational complexity.
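For contrast, the baseline scheme the YENI filter improves on can be sketched as generic unsharp masking: enhanced = image + gain * (image - lowpass(image)). With a linear low-pass (a box blur here), the mask retains edge energy and produces exactly the over/undershoot artifacts the paper's adaptive filter is designed to prevent. This is an illustrative baseline, not the YENI filter itself.

```python
import numpy as np

def box_blur(img, r=2):
    """Simple (2r+1) x (2r+1) box-filter low-pass with edge padding."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp(img, gain=1.5, r=2):
    """Classic unsharp masking: boost the high/mid-frequency residual."""
    mask = img - box_blur(img, r)
    return img + gain * mask
```

Applied to a step edge, the output overshoots above the bright side and undershoots below the dark side; an edge-preserving unsharp mask (as in the paper) keeps the edge out of the mask so these artifacts do not appear.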

Clark N Taylor - One of the best experts on this subject based on the ideXlab platform.

  • vision based target geo Location using a fixed wing miniature air vehicle
    Journal of Intelligent and Robotic Systems, 2006
    Co-Authors: Blake D Barber, Joshua Redding, Timothy W Mclain, Randal W Beard, Clark N Taylor
    Abstract:

    This paper presents a method for determining the GPS Location of a ground-based object when imaged from a fixed-wing miniature air vehicle (MAV). Using the Pixel Location of the target in an image, measurements of MAV position and attitude, and camera pose angles, the target is localized in world coordinates. The main contribution of this paper is to present four techniques for reducing the localization error. In particular, we discuss RLS filtering, bias estimation, flight path selection, and wind estimation. The localization method has been implemented and flight tested on BYU's MAV testbed and experimental results are presented demonstrating the localization of a target to within 3 m of its known GPS Location.
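The geometric core of this localization can be sketched as a ray-ground intersection: back-project the target's Pixel Location through a pinhole camera model into a world-frame ray, then intersect that ray with a flat-ground plane z = 0. The rotation (vehicle attitude composed with camera pose) and intrinsics below are illustrative placeholders; the paper's contribution is the filtering and bias/wind corrections applied on top of this raw estimate.

```python
import numpy as np

def locate_target(pix, f, c, cam_pos, R_cam_to_world):
    """Intersect the back-projected pixel ray with the ground plane z = 0.

    pix: (u, v) Pixel Location of the target.
    f:   focal length in pixels; c: principal point (cx, cy).
    cam_pos: camera position in world coordinates (z = altitude).
    R_cam_to_world: rotation from camera frame to world frame
                    (vehicle attitude composed with camera pose angles).
    """
    ray_cam = np.array([pix[0] - c[0], pix[1] - c[1], f], dtype=float)
    ray_world = R_cam_to_world @ ray_cam
    t = -cam_pos[2] / ray_world[2]        # scale to reach the ground plane
    return cam_pos + t * ray_world

# Camera 100 m up, looking straight down (camera z-axis -> world -z)
R = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]], float)
p = locate_target((320, 240), f=800, c=(320, 240),
                  cam_pos=np.array([10.0, 20.0, 100.0]), R_cam_to_world=R)
```

Errors in attitude, camera pose angles, and altitude all propagate through R and t, which is why the paper's RLS filtering, bias estimation, path selection, and wind estimation are needed to reach the reported 3 m accuracy.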

  • vision based target localization from a fixed wing miniature air vehicle
    American Control Conference, 2006
    Co-Authors: Joshua Redding, Timothy W Mclain, Randal W Beard, Clark N Taylor
    Abstract:

    This paper presents a method for localizing a ground-based object when imaged from a small fixed-wing unmanned aerial vehicle (UAV). Using the Pixel Location of the target in an image, measurements of UAV position and attitude, and camera pose angles, the target is localized in world coordinates. This paper presents a study of possible error sources and localization sensitivities to each source. The localization method has been implemented and experimental results are presented demonstrating the localization of a target to within 11 m of its known Location.

Adrian G Dyer - One of the best experts on this subject based on the ideXlab platform.

  • Differentiating biological colours with few and many sensors: Spectral reconstruction with RGB and hyperspectral cameras
    PLoS ONE, 2015
    Co-Authors: Jair E. Garcia, Madeline B. Girard, M. Kasumovic, Philip A. Wilksch, Phred Petersen, Adrian G Dyer
    Abstract:

    Background

    The ability to discriminate between two similar or progressively dissimilar colours is important for many animals as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have recently been developed to produce image spectrophotometers that recover reflectance spectra at individual Pixel Locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer.

    Main Findings

    (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples, whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled Pixel Location independently of its chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample.

    Conclusion

    (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides a means to explore other important aspects of colour vision, like the perception of certain types of camouflage and colour constancy, where multiple narrow-band sensors increase resolution.
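The RGB-based spectral reconstruction discussed above can be sketched as a linear inverse problem: fit a map from camera responses to reflectance spectra by least squares on training pairs, then apply it to new responses. With only three channels the map is heavily underdetermined, which is exactly the metamerism uncertainty the paper highlights; a hyperspectral camera with many narrow bands needs no such inference. The sensitivities and reflectances below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_train = 31, 200
sens = rng.random((n_bands, 3))               # synthetic RGB sensitivities
S = rng.random((n_train, n_bands))            # training reflectance spectra
C = S @ sens                                  # simulated camera responses
M, *_ = np.linalg.lstsq(C, S, rcond=None)     # 3 x n_bands response->spectrum map

c_new = rng.random(n_bands) @ sens            # RGB response of an unseen sample
s_hat = c_new @ M                             # reconstructed spectrum
```

The reconstruction is response-consistent (re-imaging s_hat reproduces c_new), but many different spectra share that response; distinguishing among them is precisely what the extra hyperspectral channels buy.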

Blair J Rossetti - One of the best experts on this subject based on the ideXlab platform.

  • semi blind sparse affine spectral unmixing of autofluorescence contaminated micrographs
    Bioinformatics, 2020
    Co-Authors: Blair J Rossetti, Steven A Wilbert, Jessica Mark L Welch, Gary G Borisy, James G Nagy
    Abstract:

    Motivation

    Spectral unmixing methods attempt to determine the concentrations of different fluorophores present at each Pixel Location in an image by analyzing a set of measured emission spectra. Unmixing algorithms have shown great promise for applications where samples contain many fluorescent labels; however, existing methods perform poorly when confronted with autofluorescence-contaminated images.

    Results

    We propose an unmixing algorithm designed to separate fluorophores with overlapping emission spectra from contamination by autofluorescence and background fluorescence. First, we formally define a generalization of the linear mixing model, called the affine mixture model (AMM), that specifically accounts for background fluorescence. Second, we use the AMM to derive an affine nonnegative matrix factorization method for estimating fluorophore endmember spectra from reference images. Lastly, we propose a semi-blind sparse affine spectral unmixing (SSASU) algorithm that uses knowledge of the estimated endmembers to learn the autofluorescence and background fluorescence spectra on a per-image basis. When unmixing real-world spectral images contaminated by autofluorescence, SSASU greatly improved proportion indeterminacy as compared to existing methods for a given relative reconstruction error.

    Availability and implementation

    The source code used for this paper was written in Julia and is available with the test data at https://github.com/brossetti/ssasu.

  • semi blind sparse affine spectral unmixing of autofluorescence contaminated micrographs
    bioRxiv, 2019
    Co-Authors: Blair J Rossetti, Steven A Wilbert, Jessica Mark L Welch, Gary G Borisy, James G Nagy
    Abstract:

    Spectral unmixing methods attempt to determine the concentrations of different fluorophores present at each Pixel Location in an image by analyzing a set of measured emission spectra. Unmixing algorithms have shown great promise for applications where samples contain many fluorescent labels; however, existing methods perform poorly when confronted with autofluorescence-contaminated images. We propose an unmixing algorithm designed to separate fluorophores with overlapping emission spectra from contamination by autofluorescence and background fluorescence. First, we formally define a generalization of the linear mixing model, called the affine mixture model (AMM), that specifically accounts for background fluorescence. Second, we use the AMM to derive an affine nonnegative matrix factorization method for estimating fluorophore endmember spectra from reference images. Lastly, we propose a semi-blind sparse affine spectral unmixing (SSASU) algorithm that uses knowledge of the estimated endmembers to learn the autofluorescence and background fluorescence spectra on a per-image basis. When unmixing real-world spectral images contaminated by autofluorescence, SSASU greatly improved proportion indeterminacy as compared to existing methods for a given relative reconstruction error. The source code used for this paper was written in Julia and is available with the test data at https://github.com/brossetti/ssasu.