Pseudo Instruction

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 942 Experts worldwide ranked by ideXlab platform

Atsushi Igarashi - One of the best experts on this subject based on the ideXlab platform.

  • a guess and assume approach to loop fusion for program verification
    Partial Evaluation and Semantics-Based Program Manipulation, 2017
    Co-Authors: Akifumi Imanishi, Kohei Suenaga, Atsushi Igarashi
    Abstract:

    Loop fusion—a program transformation that merges multiple consecutive loops into a single one—has been studied mainly for compiler optimization. In this paper, we propose a new loop fusion strategy that can fuse any loops—even loops with data dependence—and show that it is useful for program verification because it can simplify loop invariants. The crux of our loop fusion is the following observation: if the state after the first loop were known, the two loop bodies could be computed at the same time, without suffering from data dependence, by renaming program variables. Our loop fusion produces a program that guesses the unknown state after the first loop nondeterministically, executes the fused loop with renamed variables, compares the guessed state with the state actually computed by the fused loop, and, if they do not match, diverges. The last two steps, comparison and divergence, are crucial to preserving partial correctness. We call our approach "guess-and-assume" because, in addition to the first step of guessing, the last two steps can be expressed by the pseudo-instruction assume, commonly used in program verification. We formalize our loop fusion for a simple imperative language and prove that it preserves partial correctness. We further extend the guess-and-assume technique to reversing loop execution, which is useful for verifying a certain class of consecutive loops. Finally, we confirm by experiments that our transformation techniques enable state-of-the-art model checkers to verify several small programs that they otherwise could not.
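A minimal Python sketch (not from the paper) of the guess-and-assume idea on two dependent loops: the guess is passed in as a parameter, and `assume` is modeled as pruning any execution where the guess turns out wrong, standing in for divergence.

```python
def assume(cond):
    # Models the verification pseudo-instruction `assume`: executions
    # violating the condition are discarded (an exception stands in
    # for divergence here).
    if not cond:
        raise RuntimeError("infeasible execution pruned")

def original(a):
    s = 0
    for x in a:          # loop 1
        s += x
    t = 0
    for x in a:          # loop 2 depends on the final value of s
        t += s * x
    return t

def fused(a, g):
    # g is the nondeterministic guess of s's value after loop 1
    s, t = 0, 0
    for x in a:          # single fused loop; loop 2's use of s is renamed to g
        s += x
        t += g * x
    assume(s == g)       # prune the execution unless the guess was right
    return t

print(fused([1, 2, 3], 6))  # correct guess: prints 36, same as original([1, 2, 3])
```

For any wrong guess the fused program has no terminating execution, which is why the transformation preserves partial (but not total) correctness.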

  • PEPM - A guess-and-assume approach to loop fusion for program verification
    Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation - PEPM '18, 2017
    Co-Authors: Akifumi Imanishi, Kohei Suenaga, Atsushi Igarashi

Akifumi Imanishi - One of the best experts on this subject based on the ideXlab platform.

  • a guess and assume approach to loop fusion for program verification
    Partial Evaluation and Semantics-Based Program Manipulation, 2017
    Co-Authors: Akifumi Imanishi, Kohei Suenaga, Atsushi Igarashi

  • PEPM - A guess-and-assume approach to loop fusion for program verification
    Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation - PEPM '18, 2017
    Co-Authors: Akifumi Imanishi, Kohei Suenaga, Atsushi Igarashi

Kohei Suenaga - One of the best experts on this subject based on the ideXlab platform.

  • a guess and assume approach to loop fusion for program verification
    Partial Evaluation and Semantics-Based Program Manipulation, 2017
    Co-Authors: Akifumi Imanishi, Kohei Suenaga, Atsushi Igarashi

  • PEPM - A guess-and-assume approach to loop fusion for program verification
    Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation - PEPM '18, 2017
    Co-Authors: Akifumi Imanishi, Kohei Suenaga, Atsushi Igarashi

Chih-hung Chang - One of the best experts on this subject based on the ideXlab platform.

  • A Low Power-Consuming Embedded System Design by Reducing Memory Access Frequencies (this work was supported partially by the National Science Council, Taiwan: NSC-93-2213-E-324-030- and NSC-94-2213-E-035-050-)
    IEICE Transactions on Information and Systems, 2005
    Co-Authors: Ching-wen Chen, Chih-hung Chang
    Abstract:

    When an embedded system is designed, system performance and power consumption must be considered carefully. In this paper, we focus on reducing the number of memory accesses in embedded systems to improve performance and save power. We exploit the locality of running programs to reduce memory accesses, thereby saving power and maximizing the performance of an embedded system. We encode the most frequently executed instructions with shorter code words and pack consecutive code words into a pseudo instruction. Once the decompression engine fetches one pseudo instruction, it can extract multiple instructions, so the number of memory accesses is reduced efficiently thanks to spatial locality. However, the number of most frequently executed instructions differs with the program size of each application; that is, the number of memory accesses increases when fewer encoded instructions fit in a pseudo instruction, which degrades system performance and power consumption. To solve this problem, we also propose the use of multiple reference tables. Multiple reference tables give the most frequently executed instructions shorter encoded code words, thereby improving the performance and power of an embedded system. From our simulation results, our method reduces the memory access frequency by about 60% when a reference table with 256 instructions is used. In addition, when two reference tables of 256 instructions each are used, the memory access ratio is 10.69% lower than with one reference table of 512 instructions.
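As an illustrative sketch (not the authors' exact format): with a 256-entry reference table, each frequent instruction gets an 8-bit codeword, so four codewords fit in one 32-bit pseudo instruction and a single memory fetch yields four instructions.

```python
def pack(indices):
    # Pack four 8-bit codewords (indices into a 256-entry reference table)
    # into one 32-bit pseudo instruction.
    assert len(indices) == 4 and all(0 <= i < 256 for i in indices)
    word = 0
    for i in indices:
        word = (word << 8) | i
    return word

def unpack(word, table):
    # One fetched pseudo instruction expands to four instructions.
    return [table[(word >> s) & 0xFF] for s in (24, 16, 8, 0)]

# Hypothetical reference table of the 256 most frequent instructions.
table = [f"insn_{i}" for i in range(256)]
w = pack([3, 0, 255, 7])
print(unpack(w, table))  # ['insn_3', 'insn_0', 'insn_255', 'insn_7']
```

The decompression engine's per-fetch expansion is what cuts memory accesses: four instructions per access instead of one.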

  • designing a high performance and low energy consuming embedded system with considering code compressed environments
    Embedded and Real-Time Computing Systems and Applications, 2005
    Co-Authors: Ching-wen Chen, Changjung Ku, Chih-hung Chang
    Abstract:

    In this paper, we present a cost-efficient embedded system design that improves performance and power consumption based on the frequency with which instructions are executed. We exploit the locality of running programs to optimize memory space, system performance, and power consumption; that is, we compress infrequently executed code to save memory space but encode frequently executed code to reduce power consumption and maximize performance. To reduce the number of memory accesses, the frequently executed instructions are encoded as shorter code words, and consecutive code words are packed into a pseudo instruction. Once the decompression engine fetches one pseudo instruction, it can extract multiple instructions, reducing the number of memory accesses. In addition, we propose a design with multiple look-ahead tables that addresses the problem that a single memory access yields fewer instructions when the number of frequently executed instructions is large. From our simulation results, our method with one 256-instruction look-ahead table does not increase the compression ratio, and power consumption is reduced by about 47.23% compared with compressing all instructions. With regard to multiple look-ahead tables, when one 512-instruction look-ahead table is used, power consumption is reduced by only 37.25%; however, when two look-ahead tables containing 256 instructions each are used, power consumption improves by 49.36%. According to the simulation results, our proposed methods based on the frequencies of executed instructions yield low power consumption, improved performance, and reduced memory space.
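A back-of-the-envelope sketch of why two small tables beat one large one: halving the table size shortens every codeword by one bit, so more codewords fit in each fetched pseudo instruction. (This ignores the table-select overhead that a real pseudo-instruction format must carry.)

```python
import math

def codeword_bits(table_size):
    # Bits needed to index a look-ahead table of the given size.
    return math.ceil(math.log2(table_size))

def insns_per_fetch(word_bits, table_size):
    # Codewords that fit in one fetched pseudo instruction
    # (table-select overhead ignored in this sketch).
    return word_bits // codeword_bits(table_size)

print(insns_per_fetch(32, 512))  # one 512-entry table: 9-bit codewords -> 3
print(insns_per_fetch(32, 256))  # two 256-entry tables: 8-bit codewords -> 4
```

One extra instruction per fetch is consistent with the direction of the reported numbers: two 256-instruction tables save more power than one 512-instruction table.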

  • RTCSA - Designing a high performance and low energy-consuming embedded system with considering code compressed environments
    11th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA'05), 2005
    Co-Authors: Ching-wen Chen, Chih-hung Chang

Ching-wen Chen - One of the best experts on this subject based on the ideXlab platform.

  • A cost- and energy-efficient embedded system design based on Instruction execution frequencies
    Journal of the Chinese Institute of Engineers, 2016
    Co-Authors: An Hsia, Ching-wen Chen, Tzong-jye Liu
    Abstract:

    In this paper, an embedded system design that takes into account the frequency of executed instructions to reduce memory space and save energy is proposed. The object code is split into frequently executed instructions and infrequently executed instructions (INFIs) by analyzing the trace files of applications. To reduce memory space, a dictionary-based method is used to compress INFIs. To reduce energy consumption, the top-executed instructions are encoded as shorter codewords and wrapped into a pseudo instruction. When a pseudo instruction that contains several codewords is fetched, it can be decompressed into several consecutive instructions, reducing the number of memory accesses. In addition, to further reduce energy consumption, a multiple-reference-table design is proposed that lets a pseudo instruction contain more encoded codewords by shortening the length of an encoded instruction. From the simulation results, the proposed design that uses ...

  • A Low Power-Consuming Embedded System Design by Reducing Memory Access Frequencies (this work was supported partially by the National Science Council, Taiwan: NSC-93-2213-E-324-030- and NSC-94-2213-E-035-050-)
    IEICE Transactions on Information and Systems, 2005
    Co-Authors: Ching-wen Chen, Chih-hung Chang

  • designing a high performance and low energy consuming embedded system with considering code compressed environments
    Embedded and Real-Time Computing Systems and Applications, 2005
    Co-Authors: Ching-wen Chen, Changjung Ku, Chih-hung Chang

  • RTCSA - Designing a high performance and low energy-consuming embedded system with considering code compressed environments
    11th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA'05), 2005
    Co-Authors: Ching-wen Chen, Chih-hung Chang