The Experts below are selected from a list of 23466 Experts worldwide ranked by the ideXlab platform
Isabelle Puaut - One of the best experts on this subject based on the ideXlab platform.
-
WCET-directed dynamic scratchpad memory allocation of data
Euromicro Conference on Real-Time Systems, 2007. Co-Authors: J F Deverge, Isabelle Puaut. Abstract: Many embedded systems feature processors coupled with a small and fast scratchpad memory. Unlike caches, the allocation of data to scratchpad memory must be handled by software. The major gain is enhanced predictability of memory access latencies. A compile-time dynamic allocation approach enables eviction and placement of data in the scratchpad memory at runtime. Previous dynamic scratchpad memory allocation approaches aimed to reduce average-case program execution time or the energy consumption due to memory accesses. For real-time systems, worst-case execution time (WCET) is the main metric to optimize. In this paper, we propose a WCET-directed algorithm to dynamically allocate the static data and stack data of a program to scratchpad memory. The granularity of placement of memory transfers (e.g. at function or basic-block boundaries) is discussed from the perspective of its computational complexity and the quality of the allocation.
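As an illustration of the kind of decision such an allocator makes, here is a minimal greedy sketch for one program region: it places in scratchpad the variables that save the most worst-case latency per byte, under the scratchpad capacity. This is an illustrative simplification, not the paper's WCET-directed algorithm (which reasons over worst-case execution paths); the latency figures and variable names are assumptions.

```python
def spm_allocate(variables, spm_size, dram_lat=10, spm_lat=1):
    """Greedy scratchpad placement for one region.
    variables: list of (name, size_bytes, worst_case_accesses)."""
    # Rank by worst-case latency saved per byte, then greedily fill the SPM.
    ranked = sorted(variables,
                    key=lambda v: v[2] * (dram_lat - spm_lat) / v[1],
                    reverse=True)
    chosen, used = [], 0
    for name, size, accesses in ranked:
        if used + size <= spm_size:
            chosen.append(name)
            used += size
    return chosen
```

For example, with an 8-byte scratchpad and variables `("a", 4, 100)`, `("b", 8, 10)`, `("c", 4, 50)`, the two small, hot variables `a` and `c` are chosen and `b` stays in DRAM.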
-
Real-time performance of dynamic memory allocation algorithms
Proceedings 14th Euromicro Conference on Real-Time Systems. Euromicro RTS 2002, 2002. Co-Authors: Isabelle Puaut. Abstract: Dynamic memory management is an important aspect of modern software engineering techniques. However, developers of real-time systems avoid using it because they fear that the worst-case execution time of the dynamic memory allocation routines is either unbounded or bounded by an excessively large bound. The degree to which this concern is valid is quantified in this paper by giving detailed average-case and worst-case measurements of the timing performance of a comprehensive panel of dynamic memory allocators. For each allocator, we compare its worst-case behavior obtained analytically with the worst timing behavior observed by executing real and synthetic workloads, and with its average timing performance. The results provide a guideline for developers of real-time systems in choosing whether to use dynamic memory management or not, and which dynamic allocation algorithm should be preferred from the viewpoint of predictability.
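The measurement side of this methodology can be sketched as a tiny harness that records per-call latency and reports the observed worst case next to the average. The allocator under test below (`bytearray`) is only a stand-in, not one of the paper's allocators, and observed worst case is of course only a lower bound on the analytical one.

```python
import time

def measure(alloc_fn, sizes):
    """Call alloc_fn once per requested size and return
    (worst_observed_latency, average_latency) in seconds."""
    samples = []
    for s in sizes:
        t0 = time.perf_counter()
        alloc_fn(s)                       # allocation under test
        samples.append(time.perf_counter() - t0)
    return max(samples), sum(samples) / len(samples)

worst, avg = measure(lambda s: bytearray(s), [64] * 1000)
```

Comparing `worst` against `avg` over real and synthetic size traces is exactly the kind of gap the paper quantifies per allocator.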
Marc Sevaux
-
Solving dynamic memory allocation problems in embedded systems with parallel variable neighborhood search strategies
Electronic Notes in Discrete Mathematics, 2015. Co-Authors: Jesús Sánchez-Oro, André Rossi, Marc Sevaux, Rafael Marti, Abraham Duarte. Abstract: Embedded systems have become an essential part of our lives thanks to their evolution in recent years, but their main drawback is power consumption. This paper focuses on improving the memory allocation of embedded systems in order to reduce their power consumption. We propose a parallel variable neighborhood search algorithm for the dynamic memory allocation problem and compare it with the state of the art. Computational results and statistical tests show that the proposed algorithm produces significantly better outcomes than the previous algorithm in shorter computing time.
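A minimal sketch of a parallel variable neighborhood search on a toy version of the problem: data structures are assigned to banks, the cost counts clashing pairs sharing a bank, shaking perturbs k assignments, and the shaken candidates are explored in parallel. The cost model and parameters are illustrative assumptions, not the paper's exact formulation.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def cost(assign, conflicts):
    # conflict cost: clashing pairs of data structures mapped to the same bank
    return sum(1 for a, b in conflicts if assign[a] == assign[b])

def local_search(assign, conflicts, n_banks):
    # first-improvement descent over single reassignments
    assign, improved = assign[:], True
    while improved:
        improved = False
        for i in range(len(assign)):
            for b in range(n_banks):
                cand = assign[:]
                cand[i] = b
                if cost(cand, conflicts) < cost(assign, conflicts):
                    assign, improved = cand, True
    return assign

def parallel_vns(n_items, n_banks, conflicts, k_max=3, iters=20, seed=0):
    rng = random.Random(seed)
    best = local_search([rng.randrange(n_banks) for _ in range(n_items)],
                        conflicts, n_banks)
    for _ in range(iters):
        # shake in neighborhoods k = 1..k_max, then search the shakes in parallel
        shaken = []
        for k in range(1, k_max + 1):
            cand = best[:]
            for i in rng.sample(range(n_items), min(k, n_items)):
                cand[i] = rng.randrange(n_banks)
            shaken.append(cand)
        with ThreadPoolExecutor(max_workers=k_max) as ex:
            results = list(ex.map(
                lambda a: local_search(a, conflicts, n_banks), shaken))
        cand = min(results, key=lambda a: cost(a, conflicts))
        if cost(cand, conflicts) < cost(best, conflicts):
            best = cand
    return best
```

Parallelizing the shake-then-descend step, rather than the descent itself, keeps each worker independent, which is the usual design choice for parallel VNS variants.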
-
GRASP with ejection chains for the dynamic memory allocation in embedded systems
Soft Computing, 2014. Co-Authors: Marc Sevaux, Maria Soto, André Rossi, Abraham Duarte, Rafael Marti. Abstract: In the design of electronic embedded systems, the allocation of data structures to memory banks is a main challenge faced by designers. Indeed, if this optimization problem is solved correctly, a great improvement in efficiency can be obtained. In this paper, we consider the dynamic memory allocation problem, where data structures have to be assigned to memory banks in different time periods during the execution of the application. We propose a GRASP to obtain high-quality solutions in short computational time, as required in this type of problem. Moreover, we also explore the adaptation of the ejection-chain methodology, originally proposed in the context of tabu search, for improved outcomes. Our experiments with real and randomly generated instances show the superiority of the proposed methods over the state-of-the-art method.
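The GRASP construction phase can be sketched as follows on a toy bank-assignment model: each data structure is placed in a bank drawn from a restricted candidate list (RCL) of near-greedy choices. The conflict-pair cost model is an illustrative assumption, and the ejection-chain improvement phase the paper adds on top is omitted here.

```python
import random

def conflict_cost(assign, conflicts):
    # count clashing pairs placed in the same bank (only placed items count)
    return sum(1 for a, b in conflicts
               if a in assign and b in assign and assign[a] == assign[b])

def grasp_construct(items, n_banks, conflicts, alpha=0.3, seed=0):
    """Greedy randomized construction: alpha=0 is pure greedy,
    alpha=1 is pure random."""
    rng = random.Random(seed)
    assign = {}
    for item in items:
        # incremental conflict cost of placing this item in each bank
        scored = []
        for b in range(n_banks):
            assign[item] = b
            scored.append((conflict_cost(assign, conflicts), b))
        del assign[item]
        scored.sort()
        cmin, cmax = scored[0][0], scored[-1][0]
        # restricted candidate list: banks within alpha of the greedy best
        rcl = [b for c, b in scored if c <= cmin + alpha * (cmax - cmin)]
        assign[item] = rng.choice(rcl)
    return assign
```

In a full GRASP this construction is repeated many times, each solution is improved by local search (here, the ejection-chain moves), and the best result is kept.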
-
Iterative approaches for a dynamic memory allocation problem in embedded systems
European Journal of Operational Research, 2013. Co-Authors: Maria Soto, André Rossi, Marc Sevaux. Abstract: Memory allocation has a significant impact on energy consumption in embedded systems. In this paper, we are interested in dynamic memory allocation for embedded systems, with a special emphasis on time performance. We propose two mid-term iterative approaches, which are compared with existing long-term and short-term approaches, and with an ILP formulation as well. These approaches rely on solving a static version of the allocation problem, and they take advantage of previous work on the static problem. A statistical analysis shows that the mid-term approach is the best in terms of solution quality.
-
Two iterative metaheuristic approaches to dynamic memory allocation for embedded systems
European conference on Evolutionary Computation in Combinatorial Optimization, 2011. Co-Authors: Maria Soto, André Rossi, Marc Sevaux. Abstract: Designers of electronic embedded systems aim at finding a tradeoff between cost and power consumption. As cache memory management has been shown to have a significant impact on power consumption, this paper addresses dynamic memory allocation for embedded systems, with a special emphasis on time performance. In this work, time is split into intervals during which the application implemented by the embedded system requires access to data structures. The proposed iterative metaheuristics aim at determining which data structures should be stored in cache memory at each time interval in order to minimize reallocation and conflict costs. These approaches take advantage of metaheuristics previously designed for a static memory allocation problem.
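The objective these metaheuristics optimize, reallocation cost plus conflict cost over time intervals, can be sketched as a small evaluation function. The unit move cost and the per-interval conflict-pair model are illustrative assumptions, not the paper's exact cost model.

```python
def plan_cost(plan, conflicts_by_interval, move_cost=1):
    """plan: one {data_structure: bank} dict per time interval.
    conflicts_by_interval: list, per interval, of clashing pairs."""
    total = 0
    for t, assign in enumerate(plan):
        # conflict cost inside this interval
        total += sum(1 for a, b in conflicts_by_interval[t]
                     if assign[a] == assign[b])
        # reallocation cost for structures moved since the previous interval
        if t > 0:
            total += move_cost * sum(
                1 for d in assign
                if plan[t - 1].get(d) not in (None, assign[d]))
    return total
```

A plan that keeps a structure in place avoids the move cost but may pay conflict costs in later intervals; the metaheuristics search for the best tradeoff.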
Rajeev Barua
-
Memory allocation for embedded systems with a compile-time-unknown scratch-pad size
ACM Transactions in Embedded Computing Systems, 2009. Co-Authors: Nghi Nguyen, Angel Dominguez, Rajeev Barua. Abstract: This article presents the first memory allocation scheme for embedded systems having a scratch-pad memory whose size is unknown at compile time. A scratch-pad memory (SPM) is a fast compiler-managed SRAM that replaces the hardware-managed cache. All existing memory allocation schemes for SPM require the SPM size to be known at compile time. Unfortunately, because of this constraint, the resulting executable is tied to that size of SPM and is not portable to other processor implementations having a different SPM size. Size-portable code is valuable when programs are downloaded during deployment, either via a network or portable media. Code downloads are used for fixing bugs or for enhancing functionality. The presence of different SPM sizes in different devices is common because of the evolution of VLSI technology across years. The result is that SPM cannot be used in such situations with downloaded code. To overcome this limitation, our work presents a compiler method whose resulting executable is portable across SPMs of any size. Our technique is to employ customized installer software, which decides the SPM allocation just before the program's first run, since the SPM size can be discovered at that time. The installer then, based on the decided allocation, modifies the program executable accordingly. The resulting executable places frequently used objects in SPM, considering both code and data for placement. To keep the overhead low, much of the preprocessing for the allocation is done at compile time. Results show that our benchmarks average a 41% speedup versus an all-DRAM allocation, while the optimal static allocation scheme, which knows the SPM size at compile time and is thus an unachievable upper bound, is only slightly faster (45% faster than all-DRAM). Results also show that the overhead from our customized installer averages about 1.5% in code size, 2% in runtime, and 3% in compile time for our benchmarks.
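The installer-time step can be pictured as follows: the compiler is assumed to have emitted a benefit-ranked object list at build time, and the installer, which finally knows the device's SPM size, performs a linear greedy fill. This is a deliberate simplification of the paper's method; the object names and sizes are illustrative.

```python
def install_time_allocate(ranked_objects, spm_size):
    """ranked_objects: compile-time-prepared list of (name, size_bytes),
    ordered by expected benefit. Run once at install time, when the
    device's actual SPM size is finally known."""
    placement, used = {}, 0
    for name, size in ranked_objects:
        if used + size <= spm_size:
            placement[name] = used    # assigned SPM offset
            used += size
        else:
            placement[name] = None    # object stays in DRAM
    return placement
```

Because the ranking is precomputed, the install-time work is a single linear pass, which matches the paper's goal of keeping post-deployment overhead low.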
-
Memory allocation for embedded systems with a compile-time-unknown scratch-pad size
Compilers Architecture and Synthesis for Embedded Systems, 2005. Co-Authors: Nghi Nguyen, Angel Dominguez, Rajeev Barua. Abstract: This paper presents the first memory allocation scheme for embedded systems having scratch-pad memory whose size is unknown at compile time. A scratch-pad memory (SPM) is a fast compiler-managed SRAM that replaces the hardware-managed cache. Its use is motivated by its better real-time guarantees as compared to cache and by its significantly lower overheads in energy consumption, area and access time. Existing data allocation schemes for SPM all require that the SPM size be known at compile time. Unfortunately, the resulting executable is tied to that size of SPM and is not portable to processor implementations having a different SPM size. Such portability would be valuable in situations where programs for an embedded system are not burned into the system at the time of manufacture, but rather are downloaded onto it during deployment, either using a network or portable media such as memory sticks. Such post-deployment code updates are common in distributed networks and in personal hand-held devices. The presence of different SPM sizes in different devices is common because of the evolution of VLSI technology across years. The result is that SPM cannot be used in such situations with downloaded code. To overcome this limitation, this work presents a compiler method whose resulting executable is portable across SPMs of any size. The executable at run time places frequently used objects in SPM; it considers code, global variables and stack variables for placement in SPM. The allocation is decided by modified loader software before the program is first run, once the SPM size can be discovered. The loader then modifies the program binary based on the decided allocation. To keep the overhead low, much of the pre-processing for the allocation is done at compile time. Results show that our benchmarks average a 36% speed increase versus an all-DRAM allocation, while the optimal static allocation scheme, which knows the SPM size at compile time and is thus an unachievable upper bound, is only slightly faster (41% faster than all-DRAM). Results also show that the overhead from our embedded loader averages about 1% in both code size and run time of our benchmarks.
-
Compiler-decided dynamic memory allocation for scratch-pad based embedded systems
Compilers Architecture and Synthesis for Embedded Systems, 2003. Co-Authors: Sumesh Udayakumaran, Rajeev Barua. Abstract: This paper presents a highly predictable, low-overhead and yet dynamic memory allocation strategy for embedded systems with scratch-pad memory. A scratch-pad is a fast compiler-managed SRAM memory that replaces the hardware-managed cache. It is motivated by its better real-time guarantees versus cache and by its significantly lower overheads in energy consumption, area and overall runtime, even with a simple allocation scheme [4]. Existing scratch-pad allocation methods are of two types. First, software-caching schemes emulate the workings of a hardware cache in software. Instructions are inserted before each load/store to check the software-maintained cache tags. Such methods incur large overheads in runtime, code size, energy consumption and SRAM space for tags, and deliver poor real-time guarantees, just like hardware caches. A second category of algorithms partitions variables at compile time into the two banks. For example, our previous work in [3] derives a provably optimal static allocation for global and stack variables and achieves a speedup over all earlier methods. However, a drawback of such static allocation schemes is that they do not account for dynamic program behavior. It is easy to see why a data allocation that never changes at runtime cannot achieve the full locality benefits of a cache. In this paper we present a dynamic allocation method for global and stack data that, for the first time, (i) accounts for changing program requirements at runtime, (ii) has no software-caching tags, (iii) requires no run-time checks, (iv) has extremely low overheads, and (v) yields 100% predictable memory access times. In this method, data that is about to be accessed frequently is copied into the SRAM using compiler-inserted code at fixed and infrequent points in the program. Earlier data is evicted if necessary. When compared to a provably optimal static allocation, our results show runtime reductions ranging from 11% to 38%, averaging 31.2%, using no additional hardware support. With hardware support for pseudo-DMA and full DMA, which is already provided in some commercial systems, the runtime reductions increase to 33.4% and 34.2% respectively.
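The compiler-inserted copy points can be pictured as a fixed schedule of evict/load sets applied at chosen program points. The sketch below simulates such a schedule and checks that the scratchpad capacity is never exceeded; the variable names, sizes and schedule format are illustrative assumptions, not the paper's representation.

```python
def apply_schedule(schedule, sizes, spm_size):
    """schedule: list of (evict, load) variable-name collections, one per
    compiler-chosen program point. Returns the set of SPM residents after
    each point; raises if any point would overflow the scratchpad."""
    resident, history = set(), []
    for evict, load in schedule:
        resident -= set(evict)      # evictions happen first
        resident |= set(load)       # then the new data is copied in
        if sum(sizes[v] for v in resident) > spm_size:
            raise ValueError("scratchpad overflow at this program point")
        history.append(frozenset(resident))
    return history
```

Because the schedule is fixed at compile time, the set of residents at every program point, and hence every access latency, is known in advance, which is what gives the method its 100% predictable memory access times.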
Jiman Hong
-
Dynamic memory allocator for sensor operating system: design and analysis
Journal of Information Science and Engineering, 2010. Co-Authors: Hong Min, Yookun Cho, Jiman Hong. Abstract: Dynamic memory allocation is an important mechanism used in operating systems. An efficient dynamic memory allocator can improve the performance of an operating system. In wireless sensor networks, sensor nodes have miniature computing devices, small memory space and very limited battery power. Therefore, it is important that sensor operating systems operate efficiently in terms of energy consumption and resource management, and the role of the dynamic memory allocator in a sensor operating system is more important than in a general-purpose operating system. In this paper, we propose a new dynamic memory allocation scheme that resolves the problems of existing dynamic memory allocators. We implemented our scheme on Nano-Qplus, a sensor operating system based on multi-threading. Our experimental results and static analysis show that our scheme performs efficiently in terms of execution time and memory space compared with existing memory allocation mechanisms.
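The abstract does not specify the proposed scheme's internals. As background on the design space it competes in, a fixed-size-block pool, a classic choice for tiny, energy-constrained nodes because alloc and free are O(1) with no external fragmentation within a pool, can be simulated as follows. This is a toy model, not the paper's allocator.

```python
class FixedBlockAllocator:
    """Toy fixed-size-block pool allocator: O(1) alloc/free via a
    free-block index stack, as commonly used on memory-constrained nodes."""

    def __init__(self, block_size, n_blocks):
        self.block_size = block_size
        self.free = list(range(n_blocks))   # indices of free blocks

    def alloc(self):
        # pop a free block index, or None if the pool is exhausted
        return self.free.pop() if self.free else None

    def free_block(self, idx):
        # return the block to the pool
        self.free.append(idx)
```

The tradeoff is internal fragmentation (requests smaller than `block_size` waste the remainder), which is one of the problems schemes like the one above are designed to mitigate.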
-
An efficient dynamic memory allocator for sensor operating systems
ACM Symposium on Applied Computing, 2007. Co-Authors: Hong Min, Yookun Cho, Jiman Hong. Abstract: The dynamic memory allocation mechanism is an important aspect of an operating system, because an efficient dynamic memory allocator improves the performance of the operating system. In wireless sensor networks, sensor nodes have miniature computing devices, small memory space and very limited battery power. Therefore, sensor operating systems should be able to operate efficiently in terms of energy consumption and resource management, and the role of the dynamic memory allocator in a sensor operating system is more important than in a general-purpose operating system. In this paper, we propose a new dynamic memory allocation scheme that solves the problems of existing dynamic memory allocators. We implement our scheme on Nano-Qplus, a sensor operating system based on multi-threading. Our experimental results show that our scheme performs efficiently in both time and space compared with existing memory allocation mechanisms.
Maria Soto
-
GRASP with ejection chains for the dynamic memory allocation in embedded systems
Soft Computing, 2014. Co-Authors: Marc Sevaux, Maria Soto, André Rossi, Abraham Duarte, Rafael Marti. Abstract: In the design of electronic embedded systems, the allocation of data structures to memory banks is a main challenge faced by designers. Indeed, if this optimization problem is solved correctly, a great improvement in efficiency can be obtained. In this paper, we consider the dynamic memory allocation problem, where data structures have to be assigned to memory banks in different time periods during the execution of the application. We propose a GRASP to obtain high-quality solutions in short computational time, as required in this type of problem. Moreover, we also explore the adaptation of the ejection-chain methodology, originally proposed in the context of tabu search, for improved outcomes. Our experiments with real and randomly generated instances show the superiority of the proposed methods over the state-of-the-art method.
-
Iterative approaches for a dynamic memory allocation problem in embedded systems
European Journal of Operational Research, 2013. Co-Authors: Maria Soto, André Rossi, Marc Sevaux. Abstract: Memory allocation has a significant impact on energy consumption in embedded systems. In this paper, we are interested in dynamic memory allocation for embedded systems, with a special emphasis on time performance. We propose two mid-term iterative approaches, which are compared with existing long-term and short-term approaches, and with an ILP formulation as well. These approaches rely on solving a static version of the allocation problem, and they take advantage of previous work on the static problem. A statistical analysis shows that the mid-term approach is the best in terms of solution quality.
-
Two iterative metaheuristic approaches to dynamic memory allocation for embedded systems
European conference on Evolutionary Computation in Combinatorial Optimization, 2011. Co-Authors: Maria Soto, André Rossi, Marc Sevaux. Abstract: Designers of electronic embedded systems aim at finding a tradeoff between cost and power consumption. As cache memory management has been shown to have a significant impact on power consumption, this paper addresses dynamic memory allocation for embedded systems, with a special emphasis on time performance. In this work, time is split into intervals during which the application implemented by the embedded system requires access to data structures. The proposed iterative metaheuristics aim at determining which data structures should be stored in cache memory at each time interval in order to minimize reallocation and conflict costs. These approaches take advantage of metaheuristics previously designed for a static memory allocation problem.