Efficient Algorithm

The Experts below are selected from a list of 526,812 Experts worldwide, ranked by the ideXlab platform.

James G. Pipe - One of the best experts on this subject based on the ideXlab platform.

  • Convolution kernel design and Efficient Algorithm for sampling density correction
    Magnetic Resonance in Medicine, 2009
    Co-Authors: Kenneth O Johnson, James G. Pipe
    Abstract:

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One common technique for determining weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, with the aim of minimizing the error in the fully reconstructed image. The weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally Efficient Algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the Algorithm are extended to 3D. Magn Reson Med 61:439–447, 2009. © 2009 Wiley-Liss, Inc.
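
    For context, here is a minimal sketch of the kind of iterative, convolution-based weight estimation the abstract refers to. It is an illustration only: the triangular kernel, its width, the update rule, and the brute-force O(N^2) convolution are assumptions made for this sketch, not the kernel or Algorithm designed in the paper.

    ```python
    # Hedged sketch: iterative density-compensation weights for 2D
    # non-Cartesian samples.  Kernel shape/width and the update rule
    # w <- w / (w * C) are assumptions for illustration.
    import numpy as np

    def density_compensation(kx, ky, kernel_width=0.05, n_iter=10):
        """Estimate density-compensation weights for scattered k-space samples."""
        pts = np.stack([kx, ky], axis=1)
        # Pairwise distances between sample locations (brute force, O(N^2)).
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        # Finite-support triangular kernel C(d); zero beyond kernel_width.
        C = np.clip(1.0 - d / kernel_width, 0.0, None)
        w = np.ones(len(kx))
        for _ in range(n_iter):
            w = w / (C @ w)  # densely sampled regions get smaller weights
        return w

    # Example: a small radial (non-Cartesian) trajectory.
    angles = np.linspace(0, np.pi, 8, endpoint=False)
    radii = np.linspace(-0.5, 0.5, 33)
    kx = (radii[None, :] * np.cos(angles)[:, None]).ravel()
    ky = (radii[None, :] * np.sin(angles)[:, None]).ravel()
    w = density_compensation(kx, ky)
    print(w.min(), w.max())  # samples near the k-space center weigh less
    ```

    Because the kernel has finite support, a practical implementation would sum only over nearby samples rather than forming the full pairwise matrix, which is what makes such an Algorithm computationally Efficient.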

Jean-charles Faugère - One of the best experts on this subject based on the ideXlab platform.

  • A new Efficient Algorithm for computing Gröbner bases without reduction to zero (F5)
    International Symposium on Symbolic and Algebraic Computation, 2002
    Co-Authors: Jean-charles Faugère
    Abstract:

    This paper introduces a new Efficient Algorithm for computing Gröbner bases. We replace the Buchberger criteria with an optimal criterion. We prove that the resulting Algorithm (called F5) generates no useless critical pairs if the input is a regular sequence. This is a new result in itself, but a first implementation of the Algorithm F5 shows that it is also very Efficient in practice: for instance, previously intractable problems such as Cyclic 10 can be solved. In practice, for most examples there is no reduction to zero. We illustrate the Algorithm with one detailed example.
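
    F5 itself tracks polynomial signatures to discard useless critical pairs, which is more than a short sketch can show, but the computation can be reproduced with SymPy, whose groebner function offers an F5-inspired method ("f5b") alongside Buchberger. The example system below is an assumption chosen for illustration, not one from the paper.

    ```python
    # Compute a Gröbner basis with SymPy.  method="f5b" selects its
    # F5-inspired routine; the system F is an illustrative assumption.
    from sympy import groebner, symbols

    x, y, z = symbols('x y z')
    F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

    G = groebner(F, x, y, z, order='lex', method='f5b')
    print(G)  # a lex Gröbner basis of the ideal generated by F
    ```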

  • A new Efficient Algorithm for computing Gröbner bases (F4)
    Journal of Pure and Applied Algebra, 1999
    Co-Authors: Jean-charles Faugère
    Abstract:

    This paper introduces a new Efficient Algorithm for computing Gröbner bases. To avoid as much intermediate computation as possible, the Algorithm computes successive truncated Gröbner bases, and it replaces the classical polynomial reduction found in the Buchberger Algorithm with the simultaneous reduction of several polynomials. This powerful reduction mechanism is achieved by means of a symbolic precomputation and by extensive use of sparse linear algebra methods. Current linear algebra techniques used in Computer Algebra are reviewed together with other methods coming from the numerical field. Some previously intractable problems (Cyclic 9) are presented, as well as an empirical comparison of a first implementation of this Algorithm with other well-known programs. This comparison pays careful attention to methodology issues. All the benchmarks and CPU times used in this paper are frequently updated and available on a Web page. Even though the new Algorithm does not improve the worst-case complexity, it is several times faster than previous implementations, for both integer and modulo p computations.
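
    The heart of F4, replacing many one-at-a-time polynomial reductions with a single matrix row-reduction, can be shown with a toy example. Everything below (the field GF(7), the univariate polynomials, the chosen multiples) is an assumption made to keep the sketch small; the real Algorithm works with multivariate monomial orders and sparse matrices.

    ```python
    # Toy illustration of F4's core step: several polynomials, written as
    # coefficient rows over a monomial basis, are reduced simultaneously
    # by Gaussian elimination over GF(p).
    import numpy as np

    P = 7  # a small prime field, an arbitrary choice for this sketch

    def rref_mod_p(M, p=P):
        """Reduced row echelon form of an integer matrix modulo p."""
        M = M.copy() % p
        r = 0
        for c in range(M.shape[1]):
            piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
            if piv is None:
                continue
            M[[r, piv]] = M[[piv, r]]                     # move pivot row up
            M[r] = (M[r] * pow(int(M[r, c]), -1, p)) % p  # scale pivot to 1
            for i in range(M.shape[0]):                   # clear pivot column
                if i != r and M[i, c]:
                    M[i] = (M[i] - M[i, c] * M[r]) % p
            r += 1
        return M

    # Rows are polynomials over the monomial basis [x^3, x^2, x, 1]:
    # f = x^2 + 3x + 1, g = x + 5, and the multiple x*g = x^2 + 5x,
    # collected the way F4 gathers products before one big reduction.
    rows = np.array([
        [0, 1, 3, 1],   # f
        [0, 0, 1, 5],   # g
        [0, 1, 5, 0],   # x * g
    ])
    print(rref_mod_p(rows))  # all rows reduced against each other at once
    ```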

Kenneth O Johnson - One of the best experts on this subject based on the ideXlab platform.

  • Convolution kernel design and Efficient Algorithm for sampling density correction
    Magnetic Resonance in Medicine, 2009
    Co-Authors: Kenneth O Johnson, James G. Pipe
    Abstract:

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One common technique for determining weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, with the aim of minimizing the error in the fully reconstructed image. The weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally Efficient Algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the Algorithm are extended to 3D. Magn Reson Med 61:439–447, 2009. © 2009 Wiley-Liss, Inc.

P Kanagasabapathy - One of the best experts on this subject based on the ideXlab platform.

  • Fast and Efficient Algorithm to remove Gaussian noise in digital images
    2010
    Co-Authors: V R Vijaykumar, P T Vanathi, P Kanagasabapathy
    Abstract:

    In this paper, a new fast and Efficient Algorithm capable of removing Gaussian noise with low computational complexity is presented. The Algorithm first estimates the amount of noise corruption from the noise-corrupted image. In the second stage, each center pixel is replaced by the mean value of some of the surrounding pixels, selected using a threshold value. Noise removal with edge preservation and computational complexity are two conflicting requirements, and the proposed method offers a practical balance between them. The performance of the Algorithm is tested and compared with the standard mean filter, Wiener filter, alpha-trimmed mean filter, K-means filter, bilateral filter, and the recently proposed trilateral filter. Experimental results show the superior performance of the proposed filtering Algorithm compared to the other standard Algorithms in terms of both subjective and objective evaluations. The proposed method removes Gaussian noise while better preserving edges, and its low computational complexity makes it easy to implement in hardware.
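
    A minimal sketch of the two-stage scheme described above, with assumed details: the noise estimate uses a median-absolute-deviation heuristic on horizontal pixel differences, and the threshold is taken as twice the estimated standard deviation. Neither choice is stated in the abstract.

    ```python
    # Hedged sketch: estimate the Gaussian noise level, then replace each
    # pixel by the mean of the 3x3 neighbors within a noise-derived
    # threshold of it (smooths noise without averaging across edges).
    import numpy as np

    def estimate_sigma(img):
        """Rough noise estimate from the median absolute deviation of
        horizontal first differences (a common heuristic, assumed here)."""
        d = np.diff(img.astype(float), axis=1)
        return np.median(np.abs(d - np.median(d))) / 0.6745 / np.sqrt(2)

    def threshold_mean_filter(img, k=2.0):
        img = img.astype(float)
        thresh = k * estimate_sigma(img)
        out = img.copy()
        H, W = img.shape
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                win = img[i-1:i+2, j-1:j+2]
                # Average only neighbors close in value to the center pixel.
                close = win[np.abs(win - img[i, j]) <= thresh]
                out[i, j] = close.mean()
        return out

    # Flat test image plus Gaussian noise of sigma 10.
    noisy = np.full((64, 64), 128.0) + np.random.normal(0, 10, (64, 64))
    print(round(estimate_sigma(noisy), 1))                   # close to 10
    print(noisy.std(), threshold_mean_filter(noisy).std())   # std drops
    ```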

  • Adaptive window based Efficient Algorithm for removing Gaussian noise in gray scale and color images
    Computational Intelligence, 2007
    Co-Authors: V R Vijaykumar, P T Vanathi, P Kanagasabapathy
    Abstract:

    In this paper, a new Efficient Algorithm for the removal of Gaussian noise in gray scale and color images using an adaptive window is presented. The Algorithm replaces each corrupted pixel by the mean value of the pixels inside an adaptive window. The adaptive window is formed using a threshold calculated from the noise variance. The proposed Algorithm is simple, and it removes Gaussian noise very effectively compared with other techniques. The proposed Algorithm is tested on both gray scale and color images corrupted with Gaussian noise. The visual and quantitative results show that the proposed Algorithm performs well in removing Gaussian noise while preserving edge details.
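
    A sketch of the adaptive-window idea under stated assumptions: the window around each pixel grows until enough neighbors lie within a threshold derived from the noise variance, and the pixel is replaced by their mean. The growth rule, the size limits, and the threshold factor are assumptions, not the paper's exact procedure; for color images the filter would be applied per channel.

    ```python
    # Hedged sketch of an adaptive-window mean filter for Gaussian noise.
    import numpy as np

    def adaptive_window_mean(img, sigma, k=2.0, max_radius=3, min_count=5):
        """Grow the window until >= min_count pixels lie within k*sigma
        of the center value, then replace the center by their mean."""
        img = img.astype(float)
        out = img.copy()
        H, W = img.shape
        thresh = k * sigma  # threshold formed from the noise level
        for i in range(H):
            for j in range(W):
                for r in range(1, max_radius + 1):
                    win = img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                    close = win[np.abs(win - img[i, j]) <= thresh]
                    if close.size >= min_count or r == max_radius:
                        out[i, j] = close.mean()
                        break
        return out

    noisy = 100 + np.random.normal(0, 8, (32, 32))
    print(noisy.std(), adaptive_window_mean(noisy, sigma=8).std())
    ```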