Memory Space

The Experts below are selected from a list of 18,453 Experts worldwide, ranked by the ideXlab platform.

Ozcan Ozturk - One of the best experts on this subject based on the ideXlab platform.

  • Reducing Memory Space consumption through dataflow analysis
    Computer Languages Systems & Structures, 2011
    Co-Authors: Ozcan Ozturk
    Abstract:

    Memory is a key parameter in embedded systems since both the code complexity of embedded applications and the amount of data they process are increasing. While it is true that the Memory capacity of embedded systems is continuously increasing, the increases in application complexity and dataset sizes are far greater. As a consequence, the Memory Space demand of code and data should be kept to a minimum. To reduce the Memory Space consumption of embedded systems, this paper proposes a control flow graph (CFG) based technique. Specifically, it tracks the lifetime of instructions at the basic block level. Based on the CFG analysis, if a basic block is known to be inaccessible in the rest of the program execution, the instruction Memory Space allocated to this basic block is reclaimed. On the other hand, if the Memory allocated to this basic block cannot be reclaimed, we try to compress this basic block. This way, it is possible to use the available on-chip Memory effectively, thereby satisfying most instruction/data requests from the on-chip Memory. Our experiments with this framework show that it outperforms previously proposed CFG-based Memory reduction approaches.
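
    As a rough illustration of the idea (not the paper's dataflow analysis), the Python sketch below marks a basic block's instruction Memory as reclaimable once the block can no longer be reached from the current point of execution, and marks still-reachable blocks as compression candidates. The CFG, block sizes, and function names are hypothetical.

```python
from collections import deque

def reachable_from(cfg, start):
    """Return the set of basic blocks reachable from 'start' (inclusive)."""
    seen, work = {start}, deque([start])
    while work:
        for succ in cfg.get(work.popleft(), []):
            if succ not in seen:
                seen.add(succ)
                work.append(succ)
    return seen

def plan_memory(cfg, sizes, current_block):
    """Classify every block as 'reclaim' (dead) or 'compress' (still live)."""
    live = reachable_from(cfg, current_block)
    plan = {block: ("compress" if block in live else "reclaim") for block in cfg}
    freed = sum(sizes[b] for b, act in plan.items() if act == "reclaim")
    return plan, freed

# Example CFG: B0 -> B1 -> B2, with a loop back-edge B2 -> B1.
cfg = {"B0": ["B1"], "B1": ["B2"], "B2": ["B1"]}
sizes = {"B0": 128, "B1": 256, "B2": 64}
plan, freed = plan_memory(cfg, sizes, current_block="B1")
print(plan, freed)   # B0 cannot be reached again from B1, so its 128 bytes are reclaimed
```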

  • On-chip Memory Space partitioning for chip multiprocessors using polyhedral algebra
    Iet Computers and Digital Techniques, 2010
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Mary Jane Irwin
    Abstract:

    One of the most important issues in designing a chip multiprocessor is deciding its on-chip Memory organisation. While it is possible to design an application-specific Memory architecture, this may not necessarily be the best option, in particular when the storage demands of individual processors and/or their data sharing patterns can change from one point in execution to another for the same application. Here, two problems are formulated. First, we show how a polyhedral method can be used to design, for array-based data-intensive embedded applications, an application-specific hybrid Memory architecture that has both shared and private components. We evaluate the resulting Memory configurations using a set of benchmarks and compare them to pure private and pure shared Memory on-chip multiprocessor architectures. The second proposed approach considers dynamic configuration of software-managed on-chip Memory Space to adapt to runtime variations in data storage demand and interprocessor sharing patterns. The proposed framework is fully implemented using an optimising compiler, a polyhedral tool, and a Memory partitioner (based on integer linear programming), and is tested using a suite of eight data-intensive embedded applications.
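
    The polyhedral machinery itself is hard to condense, but the simplified 1-D sketch below (an assumption of this note, using plain integer intervals in place of polyhedral sets) conveys how per-processor array sections can be split into private and shared Memory components.

```python
def partition(sections):
    """sections: {proc: (lo, hi)} inclusive index ranges accessed by each processor."""
    owners = {}
    for proc, (lo, hi) in sections.items():
        for idx in range(lo, hi + 1):
            owners.setdefault(idx, set()).add(proc)
    private = {proc: 0 for proc in sections}
    shared = 0
    for idx, procs in owners.items():
        if len(procs) == 1:
            private[next(iter(procs))] += 1   # touched by one processor only
        else:
            shared += 1                        # touched by two or more processors
    return private, shared

# Two processors sweep overlapping halves of a 100-element array.
private, shared = partition({"P0": (0, 59), "P1": (40, 99)})
print(private, shared)   # P0: 40 private, P1: 40 private, 20 shared elements
```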

  • ISVLSI - An integer linear programming based approach to simultaneous Memory Space partitioning and data allocation for chip multiprocessors
    IEEE Computer Society Annual Symposium on Emerging VLSI Technologies and Architectures (ISVLSI'06), 2006
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Guangyu Chen, M. Karakoy
    Abstract:

    The trends in advanced integrated circuit technologies require us to look for new ways to utilize large numbers of gates and reduce the effects of high interconnect delays. One promising research direction is chip multiprocessors, which integrate multiple processors on the same die. Among the components of a chip multiprocessor, its Memory subsystem is perhaps the most critical one, since it shapes both the power and performance characteristics of the resulting design. Motivated by this observation, this paper addresses the problem of decomposing (partitioning) on-chip Memory Space across parallel processors and allocating data across Memory components in an integrated manner. In the most general case, the resulting Memory architecture is a hybrid one, where some Memory components are accessed privately, whereas the others are shared by two or more processors. The proposed approach for achieving this has two complementary components: an optimizing compiler and an ILP (integer linear programming) solver. The role of the compiler in this approach is to analyze the application code and detect the interprocessor data sharing patterns, given the loop parallelization information. The job of the ILP solver, on the other hand, is to determine the sizes of the on-chip Memory components, how these Memory components are shared across multiple processors in the system, and what data each component holds. In other words, we address the problem of integrated Memory Space partitioning and data allocation for chip multiprocessors.
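
    A hedged sketch of the kind of ILP involved is shown below, written with the open-source PuLP modeller (pip install pulp); the model, capacities, costs, and access counts are illustrative stand-ins rather than the paper's actual formulation.

```python
import pulp

# per-block access counts by processor; sizes and capacities in bytes (all made up)
blocks = {"A": {"P0": 100, "P1": 5}, "B": {"P0": 10, "P1": 90}, "C": {"P0": 50, "P1": 60}}
block_size = {"A": 4, "B": 4, "C": 4}
private_cap = {"P0": 4, "P1": 4}
shared_cap = 8
PRIV_COST, SHARED_COST, REMOTE_COST = 1, 2, 10   # per-access latencies

prob = pulp.LpProblem("memory_partitioning", pulp.LpMinimize)
place = {(b, m): pulp.LpVariable(f"x_{b}_{m}", cat="Binary")
         for b in blocks for m in ["P0", "P1", "shared"]}

# each block lives in exactly one memory component
for b in blocks:
    prob += pulp.lpSum(place[b, m] for m in ["P0", "P1", "shared"]) == 1
# capacity constraints for private and shared components
for p in private_cap:
    prob += pulp.lpSum(block_size[b] * place[b, p] for b in blocks) <= private_cap[p]
prob += pulp.lpSum(block_size[b] * place[b, "shared"] for b in blocks) <= shared_cap

def cost(p, m):
    """Latency seen by processor p when its data sits in component m."""
    if m == "shared":
        return SHARED_COST
    return PRIV_COST if m == p else REMOTE_COST

# objective: total access cost over all blocks, processors, and placements
prob += pulp.lpSum(blocks[b][p] * cost(p, m) * place[b, m]
                   for b in blocks for p in ["P0", "P1"] for m in ["P0", "P1", "shared"])
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: m for (b, m), v in place.items() if v.value() == 1})
# expected: A in P0's private memory, B in P1's, C in the shared component
```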

  • CODES+ISSS - Increasing on-chip Memory Space utilization for embedded chip multiprocessors through data compression
    Proceedings of the 3rd IEEE ACM IFIP international conference on Hardware software codesign and system synthesis - CODES+ISSS '05, 2005
    Co-Authors: Mahmut Kandemir, Mary Jane Irwin, Ozcan Ozturk
    Abstract:

    Minimizing the number of off-chip Memory references is very important in chip multiprocessors from both the performance and power perspectives. To achieve this, the distance between successive reuses of the same data block must be reduced. However, this may not be possible in many cases due to data dependences between computations assigned to different processors. This paper focuses on software-managed on-chip Memory Space utilization for embedded chip multiprocessors and proposes a compression-based approach to reduce the Memory Space occupied by data blocks with large inter-processor reuse distances. The proposed approach has two major components: a compiler and an ILP (integer linear programming) solver. The compiler's job is to analyze the application code and extract information on data access patterns. This access pattern information is then passed to our ILP solver, which determines the data blocks to compress/decompress and the times (the program points) at which to compress/decompress them. We tested the effectiveness of this ILP-based approach using access patterns extracted by our compiler from application codes. Our experimental results reveal that the proposed approach is very effective in reducing power consumption. Moreover, it leads to lower energy consumption than an alternate scheme evaluated in our experiments for all the test cases studied.
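
    The sketch below replaces the paper's ILP with a simple greedy rule to convey the underlying trade-off: compress a block only when its next inter-processor reuse is far away and the bytes freed outweigh the compression/decompression overhead. All names and numbers are illustrative.

```python
def choose_blocks_to_compress(blocks, distance_threshold, overhead_cycles,
                              cycles_saved_per_byte):
    """blocks: list of dicts with 'name', 'size', 'reuse_distance', 'ratio'."""
    decisions = []
    for b in blocks:
        if b["reuse_distance"] < distance_threshold:
            continue                      # reused soon: keep it uncompressed
        bytes_freed = b["size"] * (1 - b["ratio"])
        if bytes_freed * cycles_saved_per_byte > overhead_cycles:
            decisions.append(b["name"])   # compression pays for itself
    return decisions

blocks = [
    {"name": "tile0", "size": 4096, "reuse_distance": 12000, "ratio": 0.4},
    {"name": "tile1", "size": 4096, "reuse_distance": 300,   "ratio": 0.4},
    {"name": "tile2", "size": 1024, "reuse_distance": 9000,  "ratio": 0.9},
]
print(choose_blocks_to_compress(blocks, distance_threshold=1000,
                                overhead_cycles=2000, cycles_saved_per_byte=1))
# -> ['tile0']  (tile1 is reused too soon, tile2 barely compresses)
```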

  • BB-GC: Basic-Block Level Garbage Collection
    2005
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Mary Jane Irwin
    Abstract:

    Memory Space limitation is a serious problem for many embedded systems from diverse application domains. While circuit/packaging techniques are certainly important for squeezing large quantities of data/instructions into the small memories typically employed by embedded systems, software can also play a crucial role in reducing the Memory Space demands of embedded applications. This paper focuses on a software-managed two-level Memory hierarchy and instruction accesses. Our goal is to reduce the on-chip Memory requirements of a given application as much as possible, so that the Memory Space saved can be used by other simultaneously executing applications. The proposed approach achieves this by tracking the lifetime of instructions. Specifically, when an instruction is dead (i.e., it cannot be visited again in the rest of execution), we deallocate the on-chip Memory Space allocated to it. Working on the control flow graph representation of an embedded application, our approach performs basic block-level garbage collection for on-chip memories.

Mahmut Kandemir - One of the best experts on this subject based on the ideXlab platform.

  • On-chip Memory Space partitioning for chip multiprocessors using polyhedral algebra
    Iet Computers and Digital Techniques, 2010
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Mary Jane Irwin
    Abstract:

    One of the most important issues in designing a chip multiprocessor is deciding its on-chip Memory organisation. While it is possible to design an application-specific Memory architecture, this may not necessarily be the best option, in particular when the storage demands of individual processors and/or their data sharing patterns can change from one point in execution to another for the same application. Here, two problems are formulated. First, we show how a polyhedral method can be used to design, for array-based data-intensive embedded applications, an application-specific hybrid Memory architecture that has both shared and private components. We evaluate the resulting Memory configurations using a set of benchmarks and compare them to pure private and pure shared Memory on-chip multiprocessor architectures. The second proposed approach considers dynamic configuration of software-managed on-chip Memory Space to adapt to runtime variations in data storage demand and interprocessor sharing patterns. The proposed framework is fully implemented using an optimising compiler, a polyhedral tool, and a Memory partitioner (based on integer linear programming), and is tested using a suite of eight data-intensive embedded applications.

  • SoCC - Exploiting large on-chip Memory Space through data recomputation
    23rd IEEE International SOC Conference, 2010
    Co-Authors: Mahmut Kandemir, Ehat Ercanli
    Abstract:

    This paper presents a novel on-chip Memory Space utilization strategy for architectures that accommodate large on-chip software-managed memories. In such architectures, the access latencies of data blocks are typically proportional to the distance between the processor and the requested data. Considering such an on-chip Memory hierarchy, we propose to recompute the value of an on-chip data element that is far from the processor using closer data elements, instead of directly accessing the far data, when it is beneficial to do so in terms of performance. This paper presents the details of a compiler algorithm that implements the proposed approach and reports experimental data collected using six data-intensive application programs. Our experimental evaluation indicates an 8.2% performance improvement, on average, over a state-of-the-art on-chip Memory management strategy and shows consistent improvements for varying on-chip Memory sizes and different data access latencies.
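
    A minimal sketch of the recomputation decision follows, assuming a latency model that grows linearly with distance; the cost parameters and function names are hypothetical, not the compiler algorithm from the paper.

```python
LATENCY_PER_HOP = 2   # cycles per unit of distance (assumed)

def access_cost(distance):
    return LATENCY_PER_HOP * distance

def best_plan(far_distance, operand_distances, recompute_cycles):
    """Fetch the far value, or rebuild it from nearby operands, whichever is cheaper."""
    fetch = access_cost(far_distance)
    recompute = sum(access_cost(d) for d in operand_distances) + recompute_cycles
    return ("recompute", recompute) if recompute < fetch else ("fetch", fetch)

# A value 20 hops away can instead be recomputed from two operands 2 hops away
# with 10 cycles of arithmetic: 2*2*2 + 10 = 18 cycles vs. 40 cycles to fetch it.
print(best_plan(far_distance=20, operand_distances=[2, 2], recompute_cycles=10))
```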

  • ISVLSI - An integer linear programming based approach to simultaneous Memory Space partitioning and data allocation for chip multiprocessors
    IEEE Computer Society Annual Symposium on Emerging VLSI Technologies and Architectures (ISVLSI'06), 2006
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Guangyu Chen, M. Karakoy
    Abstract:

    The trends in advanced integrated circuit technologies require us to look for new ways to utilize large numbers of gates and reduce the effects of high interconnect delays. One promising research direction is chip multiprocessors, which integrate multiple processors on the same die. Among the components of a chip multiprocessor, its Memory subsystem is perhaps the most critical one, since it shapes both the power and performance characteristics of the resulting design. Motivated by this observation, this paper addresses the problem of decomposing (partitioning) on-chip Memory Space across parallel processors and allocating data across Memory components in an integrated manner. In the most general case, the resulting Memory architecture is a hybrid one, where some Memory components are accessed privately, whereas the others are shared by two or more processors. The proposed approach for achieving this has two complementary components: an optimizing compiler and an ILP (integer linear programming) solver. The role of the compiler in this approach is to analyze the application code and detect the interprocessor data sharing patterns, given the loop parallelization information. The job of the ILP solver, on the other hand, is to determine the sizes of the on-chip Memory components, how these Memory components are shared across multiple processors in the system, and what data each component holds. In other words, we address the problem of integrated Memory Space partitioning and data allocation for chip multiprocessors.

  • ASP-DAC - Maximizing data reuse for minimizing Memory Space requirements and execution cycles
    Proceedings of the 2006 conference on Asia South Pacific design automation - ASP-DAC '06, 2006
    Co-Authors: Mahmut Kandemir, Guangyu Chen, Feihui Li
    Abstract:

    Embedded systems, from vehicles to mobile devices such as wireless phones, automatic banking machines, and new multi-modal devices, operate under tight Memory and power constraints. Therefore, their performance demands must be balanced very well against their Memory Space requirements and power consumption. Automatic tools that can optimize for Memory Space utilization and performance are expected to become increasingly important as ever larger portions of embedded designs are implemented in software. In this paper, we describe a novel optimization framework that can be used in two different ways: (i) deciding a suitable on-chip Memory capacity for a given code, and (ii) restructuring the application code to make better use of the available on-chip Memory Space. While prior proposals have addressed these two questions, the solutions proposed in this paper are very aggressive in extracting and exploiting all data reuse in the application code, restricted only by inherent data dependences.
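
    One simple way to connect data reuse to on-chip capacity, sketched below under this note's own assumptions (a reuse-distance histogram over a block-level reference trace), is to pick the smallest capacity that keeps a target fraction of reuses on chip.

```python
def reuse_distances(trace):
    """Number of distinct other blocks touched since the previous access to each block."""
    last_seen, distances = {}, []
    for i, block in enumerate(trace):
        if block in last_seen:
            distances.append(len(set(trace[last_seen[block] + 1:i])))
        last_seen[block] = i
    return distances

def capacity_for(trace, coverage=0.9):
    """Smallest on-chip capacity (in blocks) covering the given fraction of reuses."""
    ds = sorted(reuse_distances(trace))
    if not ds:
        return 0
    return ds[min(len(ds) - 1, int(coverage * len(ds)))] + 1   # +1 for the block itself

trace = ["a", "b", "a", "c", "b", "a", "d", "a"]
print(reuse_distances(trace), capacity_for(trace))   # [1, 2, 2, 1] 3
```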

  • CDES - Using Task Recomputation During Application Mapping in Parallel Embedded Architectures.
    2006
    Co-Authors: Suleyman Tosun, Mahmut Kandemir
    Abstract:

    Many Memory-sensitive embedded applications can tolerate small performance degradations if doing so reduces their Memory Space requirements significantly. This paper explores this tradeoff by proposing and evaluating an algorithm that performs recomputations at select program points to reduce the Memory Space occupied by data. Our algorithm targets heterogeneous computing platforms and operates with two user-specified parameters that bound the performance degradation of the resulting code and its Memory Space demand. It explores the solution Space, performing recomputations (instead of Memory stores) for select tasks to reduce the Memory Space demand.
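
    The sketch below uses a greedy pass under a user-specified slowdown bound to stand in for the paper's algorithm: tasks whose outputs free the most Memory per extra cycle of recomputation are recomputed instead of stored. The task list and numbers are made up for illustration.

```python
def pick_recomputations(tasks, max_slowdown, baseline_cycles):
    """tasks: list of (name, buffer_bytes, extra_cycles_if_recomputed)."""
    budget = max_slowdown * baseline_cycles
    chosen, saved_bytes, spent_cycles = [], 0, 0
    # favour tasks that free the most memory per extra cycle of recomputation
    for name, bytes_, cycles in sorted(tasks, key=lambda t: t[2] / t[1]):
        if spent_cycles + cycles <= budget:
            chosen.append(name)
            saved_bytes += bytes_
            spent_cycles += cycles
    return chosen, saved_bytes, spent_cycles

tasks = [("fft", 8192, 500), ("filter", 2048, 400), ("scale", 4096, 100)]
# allow at most a 5% slowdown over a 10,000-cycle baseline (500 extra cycles)
print(pick_recomputations(tasks, max_slowdown=0.05, baseline_cycles=10000))
# -> (['scale', 'filter'], 6144, 500)
```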

Mary Jane Irwin - One of the best experts on this subject based on the ideXlab platform.

  • On-chip Memory Space partitioning for chip multiprocessors using polyhedral algebra
    Iet Computers and Digital Techniques, 2010
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Mary Jane Irwin
    Abstract:

    One of the most important issues in designing a chip multiprocessor is deciding its on-chip Memory organisation. While it is possible to design an application-specific Memory architecture, this may not necessarily be the best option, in particular when the storage demands of individual processors and/or their data sharing patterns can change from one point in execution to another for the same application. Here, two problems are formulated. First, we show how a polyhedral method can be used to design, for array-based data-intensive embedded applications, an application-specific hybrid Memory architecture that has both shared and private components. We evaluate the resulting Memory configurations using a set of benchmarks and compare them to pure private and pure shared Memory on-chip multiprocessor architectures. The second proposed approach considers dynamic configuration of software-managed on-chip Memory Space to adapt to runtime variations in data storage demand and interprocessor sharing patterns. The proposed framework is fully implemented using an optimising compiler, a polyhedral tool, and a Memory partitioner (based on integer linear programming), and is tested using a suite of eight data-intensive embedded applications.

  • CODES+ISSS - Increasing on-chip Memory Space utilization for embedded chip multiprocessors through data compression
    Proceedings of the 3rd IEEE ACM IFIP international conference on Hardware software codesign and system synthesis - CODES+ISSS '05, 2005
    Co-Authors: Mahmut Kandemir, Mary Jane Irwin, Ozcan Ozturk
    Abstract:

    Minimizing the number of off-chip Memory references is very important in chip multiprocessors from both the performance and power perspectives. To achieve this, the distance between successive reuses of the same data block must be reduced. However, this may not be possible in many cases due to data dependences between computations assigned to different processors. This paper focuses on software-managed on-chip Memory Space utilization for embedded chip multiprocessors and proposes a compression-based approach to reduce the Memory Space occupied by data blocks with large inter-processor reuse distances. The proposed approach has two major components: a compiler and an ILP (integer linear programming) solver. The compiler's job is to analyze the application code and extract information on data access patterns. This access pattern information is then passed to our ILP solver, which determines the data blocks to compress/decompress and the times (the program points) at which to compress/decompress them. We tested the effectiveness of this ILP-based approach using access patterns extracted by our compiler from application codes. Our experimental results reveal that the proposed approach is very effective in reducing power consumption. Moreover, it leads to lower energy consumption than an alternate scheme evaluated in our experiments for all the test cases studied.

  • BB-GC: Basic-Block Level Garbage Collection
    2005
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Mary Jane Irwin
    Abstract:

    Memory Space limitation is a serious problem for many embedded systems from diverse application domains. While circuit/packaging techniques are certainly important for squeezing large quantities of data/instructions into the small memories typically employed by embedded systems, software can also play a crucial role in reducing the Memory Space demands of embedded applications. This paper focuses on a software-managed two-level Memory hierarchy and instruction accesses. Our goal is to reduce the on-chip Memory requirements of a given application as much as possible, so that the Memory Space saved can be used by other simultaneously executing applications. The proposed approach achieves this by tracking the lifetime of instructions. Specifically, when an instruction is dead (i.e., it cannot be visited again in the rest of execution), we deallocate the on-chip Memory Space allocated to it. Working on the control flow graph representation of an embedded application, our approach performs basic block-level garbage collection for on-chip memories.

  • DATE - BB-GC: Basic-Block Level Garbage Collection
    Design Automation and Test in Europe, 2005
    Co-Authors: Ozcan Ozturk, Mahmut Kandemir, Mary Jane Irwin
    Abstract:

    Memory Space limitation is a serious problem for many embedded systems from diverse application domains. While circuit/packaging techniques are certainly important for squeezing large quantities of data/instructions into the small memories typically employed by embedded systems, software can also play a crucial role in reducing the Memory Space demands of embedded applications. This paper focuses on a software-managed two-level Memory hierarchy and instruction accesses. Our goal is to reduce the on-chip Memory requirements of a given application as much as possible, so that the Memory Space saved can be used by other simultaneously executing applications. The proposed approach achieves this by tracking the lifetime of instructions. Specifically, when an instruction is dead (i.e., it cannot be visited again in the rest of execution), we deallocate the on-chip Memory Space allocated to it. Working on the control flow graph representation of an embedded application, our approach performs basic block-level garbage collection for on-chip memories.

Qian Chuan Zhao - One of the best experts on this subject based on the ideXlab platform.

  • A method based on Kolmogorov complexity to improve the efficiency of strategy optimization with limited Memory Space
    2006 American Control Conference, 2006
    Co-Authors: Qian Chuan Zhao, Yu-chi Ho
    Abstract:

    The pervasive application of digital computers in control and optimization forces us to consider the constraint of limited Memory Space when dealing with large-scale practical systems. As an example, we consider the famous Witsenhausen counterexample under the new constraint of limited Memory Space. The main difficulty is how to efficiently sample strategies that can be stored in the given Memory Space. Kolmogorov complexity measures the minimal Memory Space needed to store a strategy (and thus characterizes simple strategies), but it is incomputable. To overcome this difficulty, we propose a method based on ordered binary decision diagrams to sample only simple strategies. Besides the high sampling efficiency demonstrated by numerical testing, the proposed sampling method can easily be combined with optimization algorithms and performance evaluation techniques. As an example, we show how to combine ordinal optimization, numerical integration, and the proposed sampling method to solve the Witsenhausen problem under the constraint of limited Memory Space. We hope this work sheds some insight on computer-based optimization problems with Memory Space constraints in more general settings.
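
    The sketch below illustrates only the underlying idea, substituting a computable proxy (zlib-compressed size of a quantised strategy table) for the OBDD-based description length used in the paper; the sampling scheme and the budget are this note's assumptions.

```python
import random
import zlib

def description_bytes(table):
    """Compressed size of a quantised strategy table (a list of small ints)."""
    return len(zlib.compress(bytes(table)))

def sample_simple_strategies(n_points, budget_bytes, n_samples, seed=0):
    rng = random.Random(seed)
    kept = []
    for _ in range(n_samples):
        # piecewise-constant candidate: a few random segments over the domain
        n_seg = rng.randint(1, 4)
        bounds = sorted(rng.sample(range(1, n_points), n_seg - 1)) + [n_points]
        levels, table, start = [rng.randint(0, 255) for _ in range(n_seg)], [], 0
        for level, end in zip(levels, bounds):
            table.extend([level] * (end - start))
            start = end
        if description_bytes(table) <= budget_bytes:
            kept.append(table)   # simple enough to fit in the memory budget
    return kept

kept = sample_simple_strategies(n_points=256, budget_bytes=40, n_samples=100)
print(len(kept), "candidate strategies fit in the memory budget")
```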

  • ICRA - A SVM-based method for engine maintenance strategy optimization
    Proceedings 2006 IEEE International Conference on Robotics and Automation 2006. ICRA 2006., 2006
    Co-Authors: Qian Chuan Zhao
    Abstract:

    Owing to its abundant applications, the optimization of maintenance problems has been extensively studied in past decades. Besides the well-known difficulty of large state Space and large action Space, the pervasive application of digital computers forces us to consider the new constraint of limited Memory Space. The given Memory Space restricts which strategies can be explored during the optimization procedure. By explicitly quantifying the minimal Memory Space needed to store a strategy using a support vector machine, we propose to describe simple strategies exactly and only approximate complex strategies. This selective approximation makes the best use of the given Memory Space for any description mechanism. We use numerical results on illustrative examples to show how the selective approximation improves solution quality. We hope this work sheds some insight on how to best utilize the Memory Space in practical engine maintenance strategy optimization problems.
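
    A hedged sketch of the selective-approximation idea follows, using scikit-learn's SVC as the approximator; the exact/approximate decision rule, the memory proxy, and the toy strategy are assumptions of this note rather than the paper's procedure.

```python
import numpy as np
from sklearn.svm import SVC

def store_strategy(states, actions, budget_entries):
    """states: (n, d) float array, actions: (n,) int array of action labels."""
    if len(actions) <= budget_entries:
        # small enough: store the strategy exactly as a lookup table
        return "exact", {tuple(s): a for s, a in zip(states, actions)}
    # otherwise approximate the strategy with a support vector classifier
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(states, actions)
    return "approximate", clf

# toy strategy: replace the part when wear exceeds 0.6, otherwise keep running
rng = np.random.default_rng(0)
states = rng.random((500, 2))
actions = (states[:, 0] > 0.6).astype(int)
kind, model = store_strategy(states, actions, budget_entries=300)
print(kind, model.support_vectors_.shape if kind == "approximate" else len(model))
```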

  • A SVM-based method for engine maintenance strategy optimization
    Proceedings - IEEE International Conference on Robotics and Automation, 2006
    Co-Authors: Qing-shan Jia, Qian Chuan Zhao
    Abstract:

    Owing to its abundant applications, the optimization of maintenance problems has been extensively studied in past decades. Besides the well-known difficulty of large state Space and large action Space, the pervasive application of digital computers forces us to consider the new constraint of limited Memory Space. The given Memory Space restricts which strategies can be explored during the optimization procedure. By explicitly quantifying the minimal Memory Space needed to store a strategy using a support vector machine, we propose to describe simple strategies exactly and only approximate complex strategies. This selective approximation makes the best use of the given Memory Space for any description mechanism. We use numerical results on illustrative examples to show how the selective approximation improves solution quality. We hope this work sheds some insight on how to best utilize the Memory Space in practical engine maintenance strategy optimization problems.

M Kandemir - One of the best experts on this subject based on the ideXlab platform.

  • Exploiting shared scratch pad Memory Space in embedded multiprocessor systems
    Design Automation Conference, 2002
    Co-Authors: M Kandemir, J Ramanujam, Alok Choudhary
    Abstract:

    In this paper, we present a compiler strategy to optimize data accesses in regular array-intensive applications running on embedded multiprocessor environments. Specifically, we propose an optimization algorithm that targets the reduction of extra off-chip Memory accesses caused by inter-processor communication. This is achieved by increasing the application-wide reuse of data that resides in the scratch-pad memories of processors. Our experimental results, obtained on four array-intensive image processing applications, indicate that exploiting inter-processor data sharing can reduce the energy-delay product by as much as 33.8% (and 24.3% on average) on a four-processor embedded system. The results also show that the proposed strategy is robust in the sense that it gives consistently good results over a wide range of architectural parameters.
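
    The compiler performs this analysis statically, but the effect it exploits can be shown with a small runtime simulation (illustrative latencies and block names): a block already resident in a peer's scratch-pad is fetched from there instead of from off-chip Memory.

```python
OFFCHIP_CYCLES, REMOTE_SPM_CYCLES, LOCAL_SPM_CYCLES = 100, 10, 1

def service(access_trace, spm_contents):
    """access_trace: list of (processor, block); spm_contents: {proc: set of blocks}."""
    cycles = 0
    for proc, block in access_trace:
        if block in spm_contents[proc]:
            cycles += LOCAL_SPM_CYCLES
        elif any(block in held for p, held in spm_contents.items() if p != proc):
            cycles += REMOTE_SPM_CYCLES           # reuse a peer's scratch-pad copy
        else:
            cycles += OFFCHIP_CYCLES              # unavoidable off-chip access
            spm_contents[proc].add(block)
    return cycles

trace = [("P0", "row0"), ("P1", "row0"), ("P1", "row1"), ("P0", "row1")]
print(service(trace, {"P0": set(), "P1": set()}))   # 100 + 10 + 100 + 10 = 220
```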

  • Dynamic management of scratch pad Memory Space
    Design Automation Conference, 2001
    Co-Authors: M Kandemir, J Ramanujam, J Irwin, Narayanan Vijaykrishnan, I Kadayif, A Parikh
    Abstract:

    Optimizations aimed at improving the efficiency of on-chip memories are extremely important. We propose a compiler-controlled dynamic on-chip scratch-pad Memory (SPM) management framework that uses both loop and data transformations. Experimental results obtained using a generic cost model indicate significant reductions in data transfer activity between SPM and off-chip Memory.
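
    A minimal sketch of the copy-in/compute/release pattern such a framework arranges is given below; the tile size and the kernel are illustrative, and the loop/data transformations themselves are outside its scope.

```python
SPM_WORDS = 4   # assumed scratch-pad capacity in data words

def run_tiled(data, spm_words=SPM_WORDS):
    total, spm = 0, []
    for start in range(0, len(data), spm_words):
        spm = data[start:start + spm_words]     # copy the tile into the SPM before the nest
        total += sum(x * x for x in spm)        # the loop nest works out of the SPM
        spm = []                                # release the SPM space after the nest
    return total

print(run_tiled(list(range(10))))   # 0^2 + 1^2 + ... + 9^2 = 285
```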