The experts below are selected from a list of 327 experts worldwide, ranked by the ideXlab platform.
Stephan Weiss - One of the best experts on this subject based on the ideXlab platform.
- Multichannel spectral factorization algorithm using polynomial matrix eigenvalue decomposition
  2015 49th Asilomar Conference on Signals, Systems and Computers (ACSSC), 2015
  Co-Authors: Zeliang Wang, John G. McWhirter, Stephan Weiss
  Abstract: In this paper, we present a new multichannel spectral factorization algorithm which can be used to calculate the approximate spectral factor of any para-Hermitian polynomial matrix. The proposed algorithm is based on an iterative method for polynomial matrix eigenvalue decomposition (PEVD). Using the PEVD algorithm, the multichannel spectral factorization problem is broken down into a set of single-channel problems, each of which can be solved by existing one-dimensional spectral factorization algorithms. In effect, it transforms the multichannel spectral factorization problem into one that is much easier to solve.
Mohamed Nadif - One of the best experts on this subject based on the ideXlab platform.
- A topographical nonnegative matrix factorization algorithm
  The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
  Co-Authors: Nicoleta Rogovschi, Lazhar Labiod, Mohamed Nadif
  Abstract: In this paper we explore a novel topological organization algorithm for data clustering and visualization, named TPNMF. It produces a clustering of the data together with a projection of the clusters onto a two-dimensional grid that preserves the topological order of the initial data. The proposed algorithm is based on a nonnegative matrix factorization (NMF) formalism using a neighborhood function that takes into account the topological order of the data. TPNMF was validated on various real datasets, and the experimental results show good topological ordering and homogeneous clustering.
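For orientation, the NMF formalism that TPNMF builds on is the standard multiplicative-update scheme of Lee and Seung. The sketch below shows only that plain scheme; TPNMF's topological neighborhood term is omitted, and the function name and parameters are illustrative:

```python
import numpy as np

def nmf(X, k, iters=300, seed=0):
    """Lee-Seung multiplicative updates minimising ||X - W @ H||_F with
    W, H >= 0. TPNMF adds a grid-neighbourhood smoothing term on top of
    a scheme like this; that term is omitted here."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-4
    H = rng.random((k, n)) + 1e-4
    for _ in range(iters):
        # multiplicative updates: ratios of nonnegative terms keep W, H >= 0
        H *= (W.T @ X) / (W.T @ W @ H + 1e-10)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-10)
    return W, H

rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 15))   # exactly rank-3, nonnegative
W, H = nmf(X, 3)
```

Because every update multiplies by a ratio of nonnegative quantities, nonnegativity of W and H is preserved automatically, which is what yields the parts-based representation.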
Kritsanapong Somsuk - One of the best experts on this subject based on the ideXlab platform.
- Improving Fermat factorization algorithm by dividing modulus into three forms
  Engineering and Applied Science Research, 2020
  Co-Authors: Kritsanapong Somsuk, Kitt Tientanopajai
  Abstract: Integer factorization (IF) became an important issue with the advent of RSA, the public-key cryptosystem, because IF is one of the techniques for breaking RSA. Fermat's factorization algorithm (FFA) is an integer factorization algorithm that can factor any value of the modulus; in general, FFA factors the modulus very quickly when the two prime factors are close to each other. Although many factorization algorithms improved from FFA have been proposed, finding the prime factors remains time-consuming. The aim of this paper is to present a new improvement of FFA that reduces the computation time by removing some iterations of the computation. The key of the proposed algorithm is the combination of three techniques that check the form of the modulus before deciding to exclude some integers from the computation. The proposed algorithm is called Multi Forms of Modulus for Fermat Factorization Algorithm (Mn-FFA). The experimental results show that Mn-FFA reduces the number of iterations for all values of the modulus compared with FFA and the other improved algorithms.
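For reference, the baseline that Mn-FFA improves on is classical Fermat factorization, which searches for a representation n = a² − b² = (a − b)(a + b). A minimal sketch (the baseline only, not the paper's three-form filtering):

```python
import math

def fermat_factor(n):
    """Classical Fermat factorization of an odd composite n: search upward
    from ceil(sqrt(n)) for a such that a^2 - n is a perfect square b^2,
    giving n = (a - b)(a + b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1                      # start at ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:             # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

p, q = fermat_factor(10403)         # 10403 = 101 * 103: close factors, found fast
```

The closer the two prime factors are, the fewer values of a need to be tried, which is why FFA is fast exactly in that regime; Mn-FFA's contribution is to skip candidates whose form rules them out in advance.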
- Decreasing size of parameter for computing greatest common divisor to speed up New Factorization Algorithm based on Pollard Rho
  Lecture Notes in Electrical Engineering, 2020
  Co-Authors: Kritsanapong Somsuk
  Abstract: Pollard Rho is an integer factorization algorithm for factoring the modulus in order to recover the RSA private key, which is kept secret. However, this algorithm cannot factor all values of the modulus. The New Factorization Algorithm (NF), based on Pollard Rho, was later proposed to address this limitation. Nevertheless, both Pollard Rho and NF are time-consuming when finding the two large prime factors of the modulus, because they must compute a greatest common divisor in every iteration. In this paper, a method to speed up NF is presented that reduces the size of one of the two parameters used to compute the greatest common divisor; when the size of one operand is reduced, the time to compute the greatest common divisor decreases as well. The experimental results show that the computation time of this method is reduced for all values of the modulus. Moreover, the average time of the proposed method for factoring the modulus is about 6 percent faster than NF.
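The per-iteration gcd that the paper targets is visible in the classical Pollard Rho loop. Below is a hedged sketch of that baseline with Floyd cycle detection; the paper's operand-size reduction (and NF itself) is not reproduced here:

```python
import math

def pollard_rho(n, c=1):
    """Classical Pollard Rho with Floyd cycle detection. Note the gcd
    computed in every iteration: this is the operation whose operand
    size the paper's method shrinks."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step of x -> x^2 + c mod n
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    return d if d != n else None     # d == n means this choice of c failed

f = pollard_rho(8051)                # 8051 = 83 * 97
```

The `None` return on failure reflects the limitation mentioned in the abstract: for some moduli (and some choices of c) the walk collapses without revealing a factor.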
- The new integer factorization algorithm based on Fermat's factorization algorithm and Euler's theorem
  International Journal of Electrical and Computer Engineering, 2020
  Co-Authors: Kritsanapong Somsuk
  Abstract: Although integer factorization is one of the hard problems on which breaking RSA rests, many factoring techniques are still being developed. Fermat's factorization algorithm (FFA), which performs very well when the prime factors are close to each other, is one type of integer factorization algorithm. There are two ways to implement FFA. The first, called FFA-1, searches for an integer by computing square roots; because this operation has a high computational cost, it consumes considerable time. The second, called FFA-2, is a different technique for finding the prime factors: although the number of loop iterations is quite large, no square-root computation is involved. In this paper, a new efficient factorization algorithm is introduced. Euler's theorem is applied together with FFA to find the sum of the two prime factors. The advantage of the proposed method is that almost all square-root operations are removed from the computation while the number of loops does not increase; it equals that of the first method. Therefore, compared with FFA-1, the computation time is decreased, because the square-root operation is removed and the number of loops is the same; compared with FFA-2, the proposed method uses fewer loops, so time is also reduced. Furthermore, the proposed method can be applied to many methods modified from FFA to reduce their cost further.
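To make the FFA-1/FFA-2 contrast concrete, here is a sketch of a square-root-free Fermat loop in the spirit of FFA-2: it maintains the residual r = x² − y² − n incrementally instead of testing square roots. This illustrates only the loop-based variant, not the authors' Euler-theorem improvement; the function name is illustrative:

```python
import math

def fermat_no_sqrt(n):
    """Square-root-free Fermat loop (FFA-2 style) for odd composite n:
    after one initial isqrt, only additions and comparisons are used,
    tracking r = x^2 - y^2 - n incrementally."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1                      # single initial square root
    y = 0
    r = x * x - y * y - n
    while r != 0:
        if r > 0:
            r -= 2 * y + 1          # (y+1)^2 - y^2 = 2y + 1
            y += 1
        else:
            r += 2 * x + 1          # (x+1)^2 - x^2 = 2x + 1
            x += 1
    return x - y, x + y

p, q = fermat_no_sqrt(11413)        # 11413 = 101 * 113
```

The trade-off the abstract describes is visible here: each step is cheap (no isqrt), but the number of loop iterations is larger than in the sqrt-based search over a alone.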
- The improvement of initial value closer to the target for Fermat's factorization algorithm
  Journal of Discrete Mathematical Sciences and Cryptography, 2018
  Co-Authors: Kritsanapong Somsuk
  Abstract: Integer factorization is one of the hard problems for breaking RSA. Fermat's factorization algorithm (FFA) factors the modulus very fast whenever the difference between two larg...
Zhenyu He - One of the best experts on this subject based on the ideXlab platform.
- A modified non-negative matrix factorization algorithm for face recognition
  18th International Conference on Pattern Recognition (ICPR'06), 2006
  Co-Authors: Chong Sze Tong, Wen-sheng Chen, Weipeng Zhang, Zhenyu He
  Abstract: In this paper, we propose a new variation of non-negative matrix factorization (NMF) for face recognition. The original NMF algorithm is distinguished from other pattern recognition methods by its non-negativity constraints, which lead to a parts-based representation because they allow only additive combinations. However, it should be considered an unsupervised method, since class information in the training set is not used. To exploit more information in the training images and improve classification performance, we integrate Fisher linear discriminant analysis into the NMF algorithm, resulting in a novel modified non-negative matrix factorization algorithm. Our new update rule guarantees non-negativity of all coefficients, and hence preserves the intuitive meaning of the base vectors and weight vectors, while facilitating supervised learning of within-class information. The new technique is tested on a well-known face database, the ORL Face Database. The experimental results are very encouraging and outperform traditional techniques, including the original NMF and the eigenface method.
Rongteng Wu - One of the best experts on this subject based on the ideXlab platform.
- Two-stage column block parallel LU factorization algorithm
  IEEE Access, 2020
  Co-Authors: Rongteng Wu
  Abstract: Parallel computing is increasingly important in computer architectures, and parallel architectures have become ubiquitous in everyday life. Novel architectures and programming models pose new challenges for algorithm design and system software development. This paper presents a two-stage column block parallel LU factorization algorithm for multiple-processor architectures. A given matrix is first partitioned into large blocks, and then every large block is partitioned into a number of small blocks according to the number of processors. Finally, the small column blocks are allocated to processors in an orderly "serpentine arrangement." Each iteration of the column block parallel LU factorization is separated into two stages. In the first stage, the first-step factorization operation is processed in advance, and nonblocking communication is used to reduce processor idle and waiting time and improve parallelism. In the second stage, the large blocks are used to feed more powerful processors, such as GPUs, which require more data to exploit their computing capabilities. Experiments are conducted on multicore and multi-GPU systems with different configurations to test the algorithm's performance. Compared with other related column block parallel LU factorizations, the two-stage algorithm exhibits better load balancing and parallel execution time.
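The numerical kernel underlying column-block schemes like this is a right-looking blocked LU: factor a column panel, solve for the corresponding row block of U, then update the trailing submatrix. The serial sketch below shows only that kernel; the paper's two-stage pipelining and serpentine block-to-processor mapping are not modelled, and the block size is illustrative. Pivoting is omitted, so the demo uses a diagonally dominant matrix:

```python
import numpy as np

def block_lu(A, bs=3):
    """Right-looking blocked LU without pivoting (serial sketch):
    A = L @ U with L unit lower triangular, U upper triangular."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(0, n, bs):
        e = min(k + bs, n)
        # unblocked LU of the current column panel A[k:, k:e]
        for j in range(k, e):
            A[j + 1:, j] /= A[j, j]
            A[j + 1:, j + 1:e] -= np.outer(A[j + 1:, j], A[j, j + 1:e])
        if e < n:
            # U12 = L11^{-1} A12, then trailing update A22 -= L21 @ U12
            L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return L, U

rng = np.random.default_rng(0)
A = rng.random((7, 7)) + 7 * np.eye(7)   # diagonally dominant: safe without pivoting
L, U = block_lu(A)
```

The trailing update is the matrix-matrix product that dominates the flop count, which is why the paper routes the large blocks to GPUs: they need exactly this kind of bulk work to be utilised well.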
- A heterogeneous parallel Cholesky block factorization algorithm
  IEEE Access, 2018
  Co-Authors: Rongteng Wu
  Abstract: As an essential part of current mainstream computing systems, GPUs are not only powerful graphics engines but also highly parallel programmable processors. Collaboration between CPUs and GPUs is required to obtain high computing performance in multi-CPU and multi-GPU heterogeneous systems, and developing new parallel algorithms on such architectures is challenging because of communication, load balancing, separate memory spaces, and synchronization. We present a parallel Cholesky block factorization algorithm for heterogeneous multi-CPU and multi-GPU architectures. First, the matrix is partitioned into different-sized blocks based on the relative performance of the CPUs and GPUs. Then, a one-dimensional row block-cyclic distribution strategy is used to allocate row block data to every CPU and GPU to minimize communication; the computing task for a given row block is executed by the corresponding CPU or GPU. Experiments on a system with two CPUs and eight GPUs show good load balancing, parallelism, communication cost, and scalability.
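The per-block steps that such a scheme distributes are those of a right-looking blocked Cholesky: factor the diagonal block, triangular-solve the panel below it, then apply a symmetric trailing update. A serial sketch of just those numerical steps (the CPU/GPU row block-cyclic distribution is not modelled; block size illustrative):

```python
import numpy as np

def block_cholesky(A, bs=2):
    """Right-looking blocked Cholesky (serial sketch): for symmetric
    positive definite A, returns lower-triangular L with A = L @ L.T."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(0, n, bs):
        e = min(k + bs, n)
        # factor the diagonal block
        L[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])
        if e < n:
            # panel solve: L21 = A21 @ L11^{-T}
            L[e:, k:e] = np.linalg.solve(L[k:e, k:e], A[e:, k:e].T).T
            # symmetric trailing update
            A[e:, e:] -= L[e:, k:e] @ L[e:, k:e].T
    return L

rng = np.random.default_rng(2)
B = rng.random((6, 6))
M = B @ B.T + 6 * np.eye(6)      # symmetric positive definite by construction
L = block_cholesky(M)
```

Each row block of L depends only on blocks above it, which is what makes the one-dimensional row-block distribution in the paper natural: a processor owning a row block can proceed once the panels it depends on have been communicated.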