Iteration Step

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 24993 Experts worldwide, ranked by the ideXlab platform.

Gabriel Oksa - One of the best experts on this subject based on the ideXlab platform.

  • PPAM (1) - Parallel One–Sided Jacobi SVD Algorithm with Variable Blocking Factor
    Parallel Processing and Applied Mathematics, 2014
    Co-Authors: Martin Becka, Gabriel Oksa
    Abstract:

    The parallel one-sided block-Jacobi algorithm for the matrix singular value decomposition (SVD) requires an efficient computation of symmetric Gram matrices, their eigenvalue decompositions (EVDs), and an update of matrix columns and right singular vectors by matrix multiplication. In our recent parallel implementation with \(p\) processors and blocking factor \(\ell =2p\), these tasks are computed serially in each processor in a given parallel Iteration Step, because each processor contains exactly two block columns of the input matrix \(A\). However, as shown in our previous work, with increasing \(p\) (and hence increasing blocking factor) the number of parallel Iteration Steps needed for the convergence of the whole algorithm increases faster than proportionally to \(p\), so that a good speedup is hard to achieve. We propose to break the tight relation \(\ell =2p\) and to use a small blocking factor \(\ell = p/k\) for some integer \(k\) that divides \(p\), with \(\ell\) even. The algorithm then works with pairs of logical block columns that are distributed among processors, so that all computations inside a parallel Iteration Step are themselves parallel. We discuss the optimal data distribution for parallel subproblems in the one-sided block-Jacobi algorithm and analyze its computational and communication complexity. Experimental results with full matrices of order \(8192\) show that the new algorithm with a small blocking factor scales well and can be \(2\)–\(3\) times faster than the ScaLAPACK procedure PDGESVD.
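
    The column-pairing idea underlying the method can be illustrated by a minimal serial one-sided (Hestenes) Jacobi SVD; the paper applies the same rotations to pairs of block columns in parallel, with blocking factor \(\ell\). Everything below (function name, tolerances, test values) is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Serial one-sided Jacobi SVD sketch: orthogonalize column pairs of A.

    Returns U, s, Vt with A ~= U @ diag(s) @ Vt (A assumed full column
    rank). Illustrative only; the parallel block version of the paper
    rotates block columns and computes EVDs of block Gram matrices.
    """
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                # Entries of the 2x2 Gram matrix of columns p and q.
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if gamma == 0.0:
                    continue
                # Jacobi rotation annihilating the off-diagonal entry.
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 if zeta == 0.0 else \
                    np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ rot
                V[:, [p, q]] = V[:, [p, q]] @ rot
        if off < tol:          # all column pairs numerically orthogonal
            break
    sing = np.linalg.norm(A, axis=0)   # singular values = column norms
    U = A / sing
    return U, sing, V.T
```

    After convergence the columns of the working matrix are mutually orthogonal, so their norms are the singular values; this is exactly the property the blocked parallel variant preserves per Iteration Step.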

  • Parallel One-Sided Jacobi SVD Algorithm with Variable Blocking Factor
    International Conference on Parallel Processing, 2013
    Co-Authors: Martin Becka, Gabriel Oksa
    Abstract:

    Same abstract as the PPAM (2014) entry above.

Martin Becka - One of the best experts on this subject based on the ideXlab platform.

  • PPAM (1) - Parallel One–Sided Jacobi SVD Algorithm with Variable Blocking Factor
    Parallel Processing and Applied Mathematics, 2014
    Co-Authors: Martin Becka, Gabriel Oksa
    Abstract:

    Same abstract as the corresponding PPAM (2014) entry under Gabriel Oksa above.

  • Parallel One-Sided Jacobi SVD Algorithm with Variable Blocking Factor
    International Conference on Parallel Processing, 2013
    Co-Authors: Martin Becka, Gabriel Oksa
    Abstract:

    Same abstract as the corresponding entry under Gabriel Oksa above.

Avi Ostfeld - One of the best experts on this subject based on the ideXlab platform.

  • Iterative Linearization Scheme for Convex Nonlinear Equations: Application to Optimal Operation of Water Distribution Systems
    Journal of Water Resources Planning and Management, 2013
    Co-Authors: Eyal Price, Avi Ostfeld
    Abstract:

    Convex equations arise in many fields of research; examples include the Hazen-Williams and Darcy-Weisbach head-loss formulas and chlorine decay in water supply systems. Pure linear programming (LP) cannot be applied directly to these equations, so heuristic techniques must be used. This study presents a methodology for linearizing increasing or decreasing convex nonlinear equations and incorporating them into LP optimization models. The algorithm is demonstrated on the Hazen-Williams head-loss equation combined with an LP optimal-operation water supply model. The Hazen-Williams equation is linearized between two points along the nonlinear flow curve. The first is a fixed point optimally located in the expected flow domain according to the maximum flow rate expected in the pipe (estimated from maximum flow velocities and the pipe diameter). The second is the flow rate in the pipe computed from the previous Iteration Step's solution. In each Iteration Step, the linear coefficients are updated according to the previous Step's flow rate and the fixed point, so the solution gradually converges toward the nonlinear head-loss results. The iterative process stops once an optimal solution is attained and a satisfactory approximation is reached. The methodology is demonstrated on simple and complex example applications. DOI: 10.1061/(ASCE)WR.1943-5452.0000275. © 2013 American Society of Civil Engineers. CE Database subject headings: Optimization; Water distribution systems; Chlorine; Water supply. Author keywords: Convex; Optimization; Water distribution systems; Optimal operation; Successive linearization; Head loss.
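
    The two-point chord linearization described above can be sketched on a single pipe as follows. The pipe constant K, the fixed anchor flow, and the closed-form solve standing in for the LP step are assumed illustrative values, not the paper's model.

```python
import math

def head_loss(q, K=10.0):
    """Hazen-Williams-style convex head-loss curve h(Q) = K * Q**1.852."""
    return K * q ** 1.852

def solve_by_chord_linearization(H, q_fix=2.0, q0=0.1, tol=1e-10, K=10.0):
    """Find Q with head_loss(Q) = H by successive chord linearization.

    Each iteration replaces the convex curve by the chord through the
    fixed point (q_fix, h(q_fix)) and the previous iterate, then solves
    the linear model; in the full method this solve is an LP over the
    whole network.
    """
    q_prev = q0
    for _ in range(100):
        # Chord through the fixed point and the previous Iteration Step.
        a = (head_loss(q_fix, K) - head_loss(q_prev, K)) / (q_fix - q_prev)
        b = head_loss(q_prev, K) - a * q_prev
        # Linearized model: solve a*Q + b = H for the new flow estimate.
        q_new = (H - b) / a
        if abs(q_new - q_prev) < tol:
            return q_new
        q_prev = q_new
    return q_prev
```

    Because the curve is convex and increasing, the chord lies above it between the two points, so the iterates approach the nonlinear solution monotonically from below, matching the gradual convergence described in the abstract.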

N. V. Kupryaev - One of the best experts on this subject based on the ideXlab platform.

Feng Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Optimal-Selection-Based Suppressed Fuzzy C-Means Clustering Algorithm with Self-Tuning Non-Local Spatial Information for Image Segmentation
    Expert Systems With Applications, 2014
    Co-Authors: Feng Zhao
    Abstract:

    The suppressed fuzzy c-means clustering algorithm (S-FCM) is one of the most effective fuzzy clustering algorithms. Although S-FCM has advantages, some problems remain. First, it is unreasonable to forcibly modify the membership degree values of all data points in each Iteration Step of S-FCM. Furthermore, because it uses only the spatial information derived from a pixel's neighborhood window to guide image segmentation, S-FCM cannot obtain satisfactory segmentation results on images heavily corrupted by noise. This paper proposes an optimal-selection-based suppressed fuzzy c-means clustering algorithm with self-tuning non-local spatial information for image segmentation that addresses these drawbacks. First, an optimal-selection-based suppression strategy is presented to modify the membership degree values of data points: during each Iteration Step, all data points are ranked by their largest membership degree value, and the membership degree values of the top r ranked points are modified while those of the remaining points are left unchanged. The parameter r is determined by the golden section method. Second, a novel gray-level histogram is constructed using the self-tuning non-local spatial information of each pixel, and the fuzzy c-means clustering algorithm with the optimal-selection-based suppression strategy is executed on this histogram. The self-tuning non-local spatial information of a pixel is derived from pixels with a neighborhood configuration similar to that of the given pixel, and it preserves more image information than the spatial information derived from the pixel's neighborhood window alone. The method is applied to Berkeley and other real images heavily contaminated by noise; the segmentation experiments demonstrate its superiority over other fuzzy algorithms.
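
    The optimal-selection-based suppression step can be sketched as follows, assuming the standard S-FCM modification rule (winner membership raised toward 1, the rest shrunk by a factor alpha). The function name, the suppression rate alpha, and the golden-section ratio used to pick r are illustrative assumptions; the full method additionally builds the non-local gray-level histogram, which this sketch omits.

```python
import numpy as np

def suppress_top_r(U, alpha=0.5, ratio=0.618):
    """Apply S-FCM suppression only to the top-r ranked data points.

    U: (n_points, n_clusters) fuzzy membership matrix, rows summing to 1.
    Points are ranked by their largest membership degree; only the top
    r = round(ratio * n) points get the modification
        u_win   <- 1 - alpha * (1 - u_win)
        u_other <- alpha * u_other
    which keeps each modified row summing to 1.
    """
    U = U.copy()
    n = U.shape[0]
    r = int(round(ratio * n))                    # golden-section choice of r
    winners = U.argmax(axis=1)                   # winning cluster per point
    top = np.argsort(U.max(axis=1))[::-1][:r]    # top-r ranked points
    for i in top:
        w = winners[i]
        u_w = U[i, w]
        U[i] = alpha * U[i]                      # shrink non-winning degrees
        U[i, w] = 1.0 - alpha * (1.0 - u_w)      # boost the winner
    return U
```

    Unmodified rows keep their original fuzzy memberships, which is exactly the point of the optimal-selection strategy: only confidently classified points are pushed toward crisp membership in each Iteration Step.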