The Experts below are selected from a list of 324 Experts worldwide, ranked by the ideXlab platform.
Daniel Menard - One of the best experts on this subject based on the ideXlab platform.
-
New evaluation scheme for software function approximation with non-uniform segmentation
2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Modern applications embed complex mathematical processing based on compositions of elementary functions. A good balance between approximation accuracy and implementation cost, i.e. memory Space Requirement and computation time, is needed to design an efficient implementation. From this point of view, polynomial approximation approaches obtain results of controlled accuracy at a moderate implementation cost. For software implementation on fixed-point processors, accurate results can be obtained if the interval I on which the function is computed is segmented finely enough to have an accurate approximating polynomial on each segment. Non-uniform segmentation is required to limit the number of segments and thus the implementation cost. The proposed recursive scheme exploits the trade-off between memory Requirement and evaluation time. The method is illustrated with the function exp(−√x) on the segment [2⁻⁶; 2⁵] and showed a mean speed-up ratio of 98.7 compared to math.h on the Digital Signal Processor C55x.
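A minimal Python sketch of the recursive idea described above (not the authors' fixed-point implementation): an interval is bisected until a low-degree least-squares polynomial approximates the function within a given error bound on every segment. The degree, tolerance, and sampling density are illustrative assumptions.

import numpy as np

def segment(f, lo, hi, degree=2, tol=1e-4, samples=64):
    """Recursively split [lo, hi] until a degree-`degree` polynomial
    fits f on every segment with maximum sampled error below `tol`."""
    x = np.linspace(lo, hi, samples)
    coeffs = np.polyfit(x, f(x), degree)               # least-squares fit
    if np.max(np.abs(np.polyval(coeffs, x) - f(x))) <= tol:
        return [(lo, hi, coeffs)]                      # segment is accurate enough
    mid = 0.5 * (lo + hi)
    return (segment(f, lo, mid, degree, tol, samples)
            + segment(f, mid, hi, degree, tol, samples))

def evaluate(segments, x):
    """Piecewise-polynomial evaluation (linear scan for clarity; the
    paper's point is that a smarter segment lookup makes this fast)."""
    for lo, hi, coeffs in segments:
        if lo <= x <= hi:
            return np.polyval(coeffs, x)               # Horner evaluation
    raise ValueError("x outside the approximated interval")

# The paper's benchmark function, exp(-sqrt(x)) on [2**-6, 2**5].
segs = segment(lambda x: np.exp(-np.sqrt(x)), 2**-6, 2**5)
print(len(segs), evaluate(segs, 1.0))

The non-uniform splitting concentrates segments where the function bends sharply, which is what keeps the segment count, and hence the table memory, low.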
-
New Non-Uniform Segmentation Technique for Software Function Evaluation
2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Embedded applications use more and more sophisticated computations. These computations can integrate compositions of elementary functions and can easily be approximated by polynomials. Indeed, polynomial approximation methods make it possible to find a trade-off between accuracy and computation time. Software implementation of polynomial approximation on fixed-point processors is considered in this paper. To obtain a moderate approximation error, segmentation of the interval I on which the function is computed is necessary. This paper proposes a new method to compute the values of a function on I using non-uniform segmentation and polynomial approximation. Non-uniform segmentation minimizes the number of segments created and is modeled by a tree structure. The specifications of the segmentation set the balance between memory Space Requirement and computation time. Besides, compared to table-based methods and the CORDIC algorithm, our approach significantly reduces the memory size and the function evaluation time, respectively.
-
New Type of Non-Uniform Segmentation for Software Function Evaluation
2016. Co-Authors: Justine Bonnot, Daniel Menard, Erwan Nogues. Abstract: Embedded applications integrate more and more sophisticated computations. These computations are generally compositions of elementary functions and can easily be approximated by polynomials. Indeed, polynomial approximation methods make it possible to find a trade-off between accuracy and computation time. Software implementation of polynomial approximation on fixed-point processors is considered. To obtain a moderate approximation error, segmentation of the interval I on which the function is computed is necessary. This paper presents a method to compute the values of a function on I using non-uniform segmentation and polynomial approximation. Non-uniform segmentation minimizes the number of segments created and is modeled by a tree structure. The specifications of the segmentation set the balance between memory Space Requirement and computation time. The method is illustrated with the function −log(x) on the interval [2⁻⁵; 2⁰] and showed a mean speed-up of 97.7 compared to the library libm on the Digital Signal Processor C55x.
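One way to picture the tree structure mentioned above (a sketch under the same assumptions as the earlier example, not the paper's data layout): internal nodes store split points and leaves store polynomial coefficients, so lookup cost is bounded by the tree depth while memory grows with the node count.

import numpy as np

class Node:
    def __init__(self, mid=None, left=None, right=None, coeffs=None):
        self.mid, self.left, self.right, self.coeffs = mid, left, right, coeffs

def build(f, lo, hi, degree=2, tol=1e-4, samples=64):
    x = np.linspace(lo, hi, samples)
    coeffs = np.polyfit(x, f(x), degree)
    if np.max(np.abs(np.polyval(coeffs, x) - f(x))) <= tol:
        return Node(coeffs=coeffs)                     # leaf: one polynomial
    mid = 0.5 * (lo + hi)
    return Node(mid, build(f, lo, mid, degree, tol, samples),
                     build(f, mid, hi, degree, tol, samples))

def lookup(node, x):
    while node.coeffs is None:                         # walk down to the leaf
        node = node.left if x < node.mid else node.right
    return np.polyval(node.coeffs, x)                  # Horner evaluation

# The paper's benchmark function, -log(x) on [2**-5, 2**0].
tree = build(lambda x: -np.log(x), 2**-5, 1.0)
print(lookup(tree, 0.25), -np.log(0.25))

The depth of this tree caps the evaluation time, while its node count determines the memory Space Requirement: that is the balance the abstract refers to.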
-
ASAP - New non-uniform segmentation technique for software function evaluation
2016 IEEE 27th International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Embedded applications use more and more sophisticated computations. These computations can integrate compositions of elementary functions and can easily be approximated by polynomials. Indeed, polynomial approximation methods make it possible to find a trade-off between accuracy and computation time. Software implementation of polynomial approximation on fixed-point processors is considered in this paper. To obtain a moderate approximation error, segmentation of the interval I on which the function is computed is necessary. This paper proposes a new method to compute the values of a function on I using non-uniform segmentation and polynomial approximation. Non-uniform segmentation minimizes the number of segments created and is modeled by a tree structure. The specifications of the segmentation set the balance between memory Space Requirement and computation time. Besides, compared to table-based methods and the CORDIC algorithm, our approach significantly reduces the memory size and the function evaluation time, respectively.
-
EUSIPCO - New evaluation scheme for software function approximation with non-uniform segmentation
2016 24th European Signal Processing Conference (EUSIPCO), 2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Modern applications embed complex mathematical processing based on compositions of elementary functions. A good balance between approximation accuracy and implementation cost, i.e. memory Space Requirement and computation time, is needed to design an efficient implementation. From this point of view, polynomial approximation approaches obtain results of controlled accuracy at a moderate implementation cost. For software implementation on fixed-point processors, accurate results can be obtained if the interval I on which the function is computed is segmented finely enough to have an accurate approximating polynomial on each segment. Non-uniform segmentation is required to limit the number of segments and thus the implementation cost. The proposed recursive scheme exploits the trade-off between memory Requirement and evaluation time. The method is illustrated with the function exp(−√x) on the segment [2⁻⁶; 2⁵] and showed a mean speed-up ratio of 98.7 compared to the mathematical C standard library on the Digital Signal Processor C55x.
Kap Hwan Kim - One of the best experts on this subject based on the ideXlab platform.
-
Estimating the Space Requirement for outbound container inventories in port container terminals
International Journal of Production Economics, 2011. Co-Authors: Youn Ju Woo, Kap Hwan Kim. Abstract: This paper proposes a method for allocating storage Space to groups of outbound containers in port container terminals. For this allocation, a collection of adjacent stacks is reserved for each group of containers with the same attributes. The impacts of various Space-reservation strategies on the productivity of the loading operation for outbound containers are discussed. A method is suggested for determining the size of the Space Requirement for outbound container yards.
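A toy sketch of the reservation idea in this abstract: each group of containers with the same attributes gets a contiguous run of stacks sized to its expected count. The stack capacity and group sizes are invented for illustration; the paper's actual sizing method is not reproduced here.

import math

def reserve_stacks(group_sizes, stack_capacity=30):
    """Map each container group to a contiguous range of stack indices."""
    plan, next_stack = {}, 0
    for group, containers in group_sizes.items():
        need = math.ceil(containers / stack_capacity)  # stacks for this group
        plan[group] = (next_stack, next_stack + need)  # [start, end) of stacks
        next_stack += need
    return plan, next_stack                            # total Space Requirement

plan, total_stacks = reserve_stacks({"vesselA-20ft": 95,
                                     "vesselA-40ft": 50,
                                     "vesselB-20ft": 130})
print(total_stacks, plan)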
-
The optimal determination of the Space Requirement and the number of transfer cranes for import containers
Annual Conference on Computers, 1998. Co-Authors: Kap Hwan Kim, Hong Bae Kim. Abstract: We discuss how to determine the optimal amount of storage Space and the optimal number of transfer cranes for import containers. A cost model is developed for this decision making. It includes the Space cost, the fixed cost of transfer cranes, which corresponds to the investment cost, and the variable cost of transfer cranes and outside trucks, which is related to the time spent transferring containers. A simple procedure for finding the optimal solution is provided and illustrated with a numerical example.
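A hedged sketch of the kind of cost model the abstract describes, with invented cost figures and an invented service-time relation; the paper's actual model and its solution procedure are not reproduced here, so a small grid search stands in for them.

def total_cost(space_slots, num_cranes,
               space_cost=10.0,       # cost per storage slot (assumed)
               crane_fixed=5000.0,    # investment cost per crane (assumed)
               crane_var=2.0,         # variable cost per crane-hour (assumed)
               truck_cost=3.0,        # waiting cost per truck-hour (assumed)
               containers=1000):
    # Assumed relation: transfer time per container shrinks with more cranes
    # and grows when the yard is denser (fewer slots per container).
    hours = containers * (1.0 / num_cranes) * (containers / space_slots)
    return (space_cost * space_slots
            + crane_fixed * num_cranes
            + (crane_var + truck_cost) * hours)

# Brute-force search over a small grid instead of the paper's analytic optimum.
best = min((total_cost(s, c), s, c)
           for s in range(200, 2001, 100)
           for c in range(1, 11))
print("cost=%.0f slots=%d cranes=%d" % best)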
-
A joint determination of storage locations and Space Requirements for correlated items in a miniload automated storage-retrieval system
International Journal of Production Research, 1993. Co-Authors: Kap Hwan Kim. Abstract: We study the problem of clustering inventory items and assigning them to storage locations. Inventory-related cost as well as material-handling cost is considered, to determine the Space Requirement and the storage location of each item simultaneously. An improvement heuristic algorithm is developed for the problem, and a numerical example illustrates it. The performance of the algorithm is evaluated through numerical experiments.
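To make the "improvement heuristic" idea concrete, here is a generic pairwise-interchange descent, a standard device rather than the paper's algorithm; the correlation data and the distance-based cost are placeholders.

import itertools, random

def cost(locs, corr):
    """Material-handling proxy (assumed): correlated items stored far
    apart cost more."""
    n = len(locs)
    return sum(corr[i][j] * abs(locs[i] - locs[j])
               for i in range(n) for j in range(i + 1, n))

def improve(locs, corr):
    """Keep swapping two items' locations while any swap lowers the cost."""
    current, better = cost(locs, corr), True
    while better:
        better = False
        for i, j in itertools.combinations(range(len(locs)), 2):
            locs[i], locs[j] = locs[j], locs[i]
            trial = cost(locs, corr)
            if trial < current:
                current, better = trial, True
            else:
                locs[i], locs[j] = locs[j], locs[i]    # undo a non-improving swap
    return locs, current

random.seed(0)
n = 6
corr = [[random.random() for _ in range(n)] for _ in range(n)]
print(improve(list(range(n)), corr))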
Justine Bonnot - One of the best experts on this subject based on the ideXlab platform.
-
New evaluation scheme for software function approximation with non-uniform segmentation
2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Modern applications embed complex mathematical processing based on compositions of elementary functions. A good balance between approximation accuracy and implementation cost, i.e. memory Space Requirement and computation time, is needed to design an efficient implementation. From this point of view, polynomial approximation approaches obtain results of controlled accuracy at a moderate implementation cost. For software implementation on fixed-point processors, accurate results can be obtained if the interval I on which the function is computed is segmented finely enough to have an accurate approximating polynomial on each segment. Non-uniform segmentation is required to limit the number of segments and thus the implementation cost. The proposed recursive scheme exploits the trade-off between memory Requirement and evaluation time. The method is illustrated with the function exp(−√x) on the segment [2⁻⁶; 2⁵] and showed a mean speed-up ratio of 98.7 compared to math.h on the Digital Signal Processor C55x.
-
New Non-Uniform Segmentation Technique for Software Function Evaluation
2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Embedded applications use more and more sophisticated computations. These computations can integrate compositions of elementary functions and can easily be approximated by polynomials. Indeed, polynomial approximation methods make it possible to find a trade-off between accuracy and computation time. Software implementation of polynomial approximation on fixed-point processors is considered in this paper. To obtain a moderate approximation error, segmentation of the interval I on which the function is computed is necessary. This paper proposes a new method to compute the values of a function on I using non-uniform segmentation and polynomial approximation. Non-uniform segmentation minimizes the number of segments created and is modeled by a tree structure. The specifications of the segmentation set the balance between memory Space Requirement and computation time. Besides, compared to table-based methods and the CORDIC algorithm, our approach significantly reduces the memory size and the function evaluation time, respectively.
-
New Type of Non-Uniform Segmentation for Software Function Evaluation
2016. Co-Authors: Justine Bonnot, Daniel Menard, Erwan Nogues. Abstract: Embedded applications integrate more and more sophisticated computations. These computations are generally compositions of elementary functions and can easily be approximated by polynomials. Indeed, polynomial approximation methods make it possible to find a trade-off between accuracy and computation time. Software implementation of polynomial approximation on fixed-point processors is considered. To obtain a moderate approximation error, segmentation of the interval I on which the function is computed is necessary. This paper presents a method to compute the values of a function on I using non-uniform segmentation and polynomial approximation. Non-uniform segmentation minimizes the number of segments created and is modeled by a tree structure. The specifications of the segmentation set the balance between memory Space Requirement and computation time. The method is illustrated with the function −log(x) on the interval [2⁻⁵; 2⁰] and showed a mean speed-up of 97.7 compared to the library libm on the Digital Signal Processor C55x.
-
ASAP - New non-uniform segmentation technique for software function evaluation
2016 IEEE 27th International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Embedded applications use more and more sophisticated computations. These computations can integrate compositions of elementary functions and can easily be approximated by polynomials. Indeed, polynomial approximation methods make it possible to find a trade-off between accuracy and computation time. Software implementation of polynomial approximation on fixed-point processors is considered in this paper. To obtain a moderate approximation error, segmentation of the interval I on which the function is computed is necessary. This paper proposes a new method to compute the values of a function on I using non-uniform segmentation and polynomial approximation. Non-uniform segmentation minimizes the number of segments created and is modeled by a tree structure. The specifications of the segmentation set the balance between memory Space Requirement and computation time. Besides, compared to table-based methods and the CORDIC algorithm, our approach significantly reduces the memory size and the function evaluation time, respectively.
-
EUSIPCO - New evaluation scheme for software function approximation with non-uniform segmentation
2016 24th European Signal Processing Conference (EUSIPCO), 2016. Co-Authors: Justine Bonnot, Erwan Nogues, Daniel Menard. Abstract: Modern applications embed complex mathematical processing based on compositions of elementary functions. A good balance between approximation accuracy and implementation cost, i.e. memory Space Requirement and computation time, is needed to design an efficient implementation. From this point of view, polynomial approximation approaches obtain results of controlled accuracy at a moderate implementation cost. For software implementation on fixed-point processors, accurate results can be obtained if the interval I on which the function is computed is segmented finely enough to have an accurate approximating polynomial on each segment. Non-uniform segmentation is required to limit the number of segments and thus the implementation cost. The proposed recursive scheme exploits the trade-off between memory Requirement and evaluation time. The method is illustrated with the function exp(−√x) on the segment [2⁻⁶; 2⁵] and showed a mean speed-up ratio of 98.7 compared to the mathematical C standard library on the Digital Signal Processor C55x.
Gonzalo Navarro - One of the best experts on this subject based on the ideXlab platform.
-
Practical approaches to reduce the Space Requirement of Lempel-Ziv based compressed text indices
ACM Journal of Experimental Algorithmics, 2010. Co-Authors: Diego Arroyuelo, Gonzalo Navarro. Abstract: Given a text T[1..n] over an alphabet of size σ, the full-text search problem consists in locating the occ occurrences of a given pattern P[1..m] in T. Compressed full-text self-indices are Space-efficient representations of the text that provide direct access to and indexed search on it. The LZ-index of Navarro is a compressed full-text self-index based on the LZ78 compression algorithm. This index requires about 5 times the size of the compressed text (in theory, 4nH_k(T) + o(n log σ) bits of Space, where H_k(T) is the k-th order empirical entropy of T). In practice, the average locating complexity of the LZ-index is O(σ m log_σ n + occ σ^{m/2}), where occ is the number of occurrences of P. It can extract text substrings of length ℓ in O(ℓ) time. This index outperforms competing schemes both for locating short patterns and for extracting text snippets. However, the LZ-index can be up to 4 times larger than the smallest existing indices (which use nH_k(T) + o(n log σ) bits in theory), and it does not offer Space/time tuning options. This limits its applicability. In this article, we study practical ways to reduce the Space of the LZ-index. We obtain new LZ-index variants that require 2(1 + ε)nH_k(T) + o(n log σ) bits of Space, for any 0 < ε < 1. They have an average locating time of O((1/ε)(m log n + occ σ^{m/2})), while extracting takes O(ℓ) time. We perform extensive experimentation and conclude that our schemes reduce the Space of the original LZ-index to about two-thirds of its size, that is, around 3 times the compressed text size. Our schemes extract about 1 to 2 MB of text per second, twice as fast as the most competitive alternatives. Pattern occurrences are located at a rate of up to 1 to 4 million per second. This constitutes the best Space/time trade-off when indices are allowed to use 4 times the size of the compressed text or more.
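The LZ-index is built on the LZ78 parsing of the text, so a minimal LZ78 parser (a textbook sketch, not the index structure itself) shows the phrase dictionary whose size drives the Space bounds quoted above.

def lz78_parse(text):
    """Return the LZ78 phrases as (previous-phrase number, new char) pairs."""
    dictionary = {"": 0}              # phrase -> phrase number
    phrases, current = [], ""
    for ch in text:
        if current + ch in dictionary:
            current += ch             # keep extending the current phrase
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:                       # flush a trailing, already-seen phrase
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

print(lz78_parse("abababababb"))
# -> [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, 'a'), (2, 'b')]

Compressible texts produce few phrases, which is why the index sizes above can be stated as multiples of nH_k(T).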
-
Reducing the Space Requirement of LZ-index
Combinatorial Pattern Matching, 2006. Co-Authors: Diego Arroyuelo, Gonzalo Navarro, Kunihiko Sadakane. Abstract: The LZ-index is a compressed full-text self-index able to represent a text T[1..u], over an alphabet of size σ = O(polylog(u)) and with k-th order empirical entropy H_k(T), using 4uH_k(T) + o(u log σ) bits for any k = o(log_σ u). It can report all the occ occurrences of a pattern P[1..m] in T in O(m³ log σ + (m + occ) log u) worst-case time. Its main drawback is the factor 4 in its Space complexity, which makes it larger than other state-of-the-art alternatives. In this paper we present two different approaches to reduce the Space Requirement of the LZ-index. In both cases we achieve (2 + ε)uH_k(T) + o(u log σ) bits of Space, for any constant ε > 0, and we simultaneously improve the search time to O(m² log m + (m + occ) log u). Both indexes support displaying any subtext of length ℓ in optimal O(ℓ / log_σ u) time. In addition, we show how the Space can be squeezed to (1 + ε)uH_k(T) + o(u log σ) to obtain a structure with O(m²) average search time for m ≥ 2 log_σ u.
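All the Space bounds in this abstract are multiples of the k-th order empirical entropy H_k(T). A direct sketch of its standard definition, H_k(T) = (1/u) Σ_w |T_w| H_0(T_w), where w ranges over length-k contexts and T_w collects the characters that follow occurrences of w in the text:

from collections import Counter, defaultdict
from math import log2

def empirical_entropy(text, k):
    """k-th order empirical entropy, in bits per symbol."""
    n = len(text)
    followers = defaultdict(list)     # context w -> characters following w
    for i in range(n - k):
        followers[text[i:i + k]].append(text[i + k])
    total = 0.0
    for chars in followers.values():
        counts, m = Counter(chars), len(chars)
        # |T_w| * H_0(T_w): zero-order entropy of the follower string
        total += sum(-c * log2(c / m) for c in counts.values())
    return total / n

text = "abracadabra"
print(empirical_entropy(text, 0), empirical_entropy(text, 1))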
-
Reducing the Space Requirement of LZ-index
Lecture Notes in Computer Science, 2006. Co-Authors: Diego Arroyuelo, Gonzalo Navarro, Kunihiko Sadakane. Abstract: The LZ-index is a compressed full-text self-index able to represent a text T[1..u], over an alphabet of size σ = O(polylog(u)) and with k-th order empirical entropy H_k(T), using 4uH_k(T) + o(u log σ) bits for any k = o(log_σ u). It can report all the occ occurrences of a pattern P[1..m] in T in O(m³ log σ + (m + occ) log u) worst-case time. Its main drawback is the factor 4 in its Space complexity, which makes it larger than other state-of-the-art alternatives. In this paper we present two different approaches to reduce the Space Requirement of the LZ-index. In both cases we achieve (2 + ε)uH_k(T) + o(u log σ) bits of Space, for any constant ε > 0, and we simultaneously improve the search time to O(m² log m + (m + occ) log u). Both indexes support displaying any subtext of length ℓ in optimal O(ℓ / log_σ u) time. In addition, we show how the Space can be squeezed to (1 + ε)uH_k(T) + o(u log σ) to obtain a structure with O(m²) average search time for m ≥ 2 log_σ u.
-
CPM - Reducing the Space Requirement of LZ-Index
Combinatorial Pattern Matching, 2006. Co-Authors: Diego Arroyuelo, Gonzalo Navarro, Kunihiko Sadakane. Abstract: The LZ-index is a compressed full-text self-index able to represent a text T[1..u], over an alphabet of size σ = O(polylog(u)) and with k-th order empirical entropy H_k(T), using 4uH_k(T) + o(u log σ) bits for any k = o(log_σ u). It can report all the occ occurrences of a pattern P[1..m] in T in O(m³ log σ + (m + occ) log u) worst-case time. Its main drawback is the factor 4 in its Space complexity, which makes it larger than other state-of-the-art alternatives. In this paper we present two different approaches to reduce the Space Requirement of the LZ-index. In both cases we achieve (2 + ε)uH_k(T) + o(u log σ) bits of Space, for any constant ε > 0, and we simultaneously improve the search time to O(m² log m + (m + occ) log u). Both indexes support displaying any subtext of length ℓ in optimal O(ℓ / log_σ u) time. In addition, we show how the Space can be squeezed to (1 + ε)uH_k(T) + o(u log σ) to obtain a structure with O(m²) average search time for m ≥ 2 log_σ u.
Kunihiko Sadakane - One of the best experts on this subject based on the ideXlab platform.
-
Reducing the Space Requirement of LZ-index
Combinatorial Pattern Matching, 2006. Co-Authors: Diego Arroyuelo, Gonzalo Navarro, Kunihiko Sadakane. Abstract: The LZ-index is a compressed full-text self-index able to represent a text T[1..u], over an alphabet of size σ = O(polylog(u)) and with k-th order empirical entropy H_k(T), using 4uH_k(T) + o(u log σ) bits for any k = o(log_σ u). It can report all the occ occurrences of a pattern P[1..m] in T in O(m³ log σ + (m + occ) log u) worst-case time. Its main drawback is the factor 4 in its Space complexity, which makes it larger than other state-of-the-art alternatives. In this paper we present two different approaches to reduce the Space Requirement of the LZ-index. In both cases we achieve (2 + ε)uH_k(T) + o(u log σ) bits of Space, for any constant ε > 0, and we simultaneously improve the search time to O(m² log m + (m + occ) log u). Both indexes support displaying any subtext of length ℓ in optimal O(ℓ / log_σ u) time. In addition, we show how the Space can be squeezed to (1 + ε)uH_k(T) + o(u log σ) to obtain a structure with O(m²) average search time for m ≥ 2 log_σ u.
-
Reducing the Space Requirement of LZ-index
Lecture Notes in Computer Science, 2006. Co-Authors: Diego Arroyuelo, Gonzalo Navarro, Kunihiko Sadakane. Abstract: The LZ-index is a compressed full-text self-index able to represent a text T[1..u], over an alphabet of size σ = O(polylog(u)) and with k-th order empirical entropy H_k(T), using 4uH_k(T) + o(u log σ) bits for any k = o(log_σ u). It can report all the occ occurrences of a pattern P[1..m] in T in O(m³ log σ + (m + occ) log u) worst-case time. Its main drawback is the factor 4 in its Space complexity, which makes it larger than other state-of-the-art alternatives. In this paper we present two different approaches to reduce the Space Requirement of the LZ-index. In both cases we achieve (2 + ε)uH_k(T) + o(u log σ) bits of Space, for any constant ε > 0, and we simultaneously improve the search time to O(m² log m + (m + occ) log u). Both indexes support displaying any subtext of length ℓ in optimal O(ℓ / log_σ u) time. In addition, we show how the Space can be squeezed to (1 + ε)uH_k(T) + o(u log σ) to obtain a structure with O(m²) average search time for m ≥ 2 log_σ u.
-
CPM - Reducing the Space Requirement of LZ-Index
Combinatorial Pattern Matching, 2006. Co-Authors: Diego Arroyuelo, Gonzalo Navarro, Kunihiko Sadakane. Abstract: The LZ-index is a compressed full-text self-index able to represent a text T[1..u], over an alphabet of size σ = O(polylog(u)) and with k-th order empirical entropy H_k(T), using 4uH_k(T) + o(u log σ) bits for any k = o(log_σ u). It can report all the occ occurrences of a pattern P[1..m] in T in O(m³ log σ + (m + occ) log u) worst-case time. Its main drawback is the factor 4 in its Space complexity, which makes it larger than other state-of-the-art alternatives. In this paper we present two different approaches to reduce the Space Requirement of the LZ-index. In both cases we achieve (2 + ε)uH_k(T) + o(u log σ) bits of Space, for any constant ε > 0, and we simultaneously improve the search time to O(m² log m + (m + occ) log u). Both indexes support displaying any subtext of length ℓ in optimal O(ℓ / log_σ u) time. In addition, we show how the Space can be squeezed to (1 + ε)uH_k(T) + o(u log σ) to obtain a structure with O(m²) average search time for m ≥ 2 log_σ u.