Memory Model

The experts below are selected from a list of 323,451 experts worldwide, ranked by the ideXlab platform.

Frederic T Chong - One of the best experts on this subject based on the ideXlab platform.

  • Minos: Control Data Attack Prevention Orthogonal to Memory Model
    International Symposium on Microarchitecture, 2004
    Co-Authors: Jedidiah R Crandall, Frederic T Chong
    Abstract:

    We introduce Minos, a microarchitecture that implements Biba's low-water-mark integrity policy on individual words of data. Minos stops attacks that corrupt control data to hijack program control flow but is orthogonal to the memory model. Control data is any data which is loaded into the program counter on control flow transfer, or any data used to calculate such data. The key is that Minos tracks the integrity of all data, but protects control flow by checking this integrity when a program uses the data for control transfer. Existing policies, in contrast, need to differentiate between control and non-control data a priori, a task made impossible by coercions between pointers and other data types such as integers in the C language. Our implementation of Minos for Red Hat Linux 6.2 on a Pentium-based emulator is a stable, usable Linux system on the network on which we are currently running a web server. Our emulated Minos systems running Linux and Windows have stopped several actual attacks. We present a microarchitectural implementation of Minos that achieves negligible impact on cycle time with a small investment in die area, and minor changes to the Linux kernel to handle the tag bits and perform virtual memory swapping.
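
    Minos is a hardware mechanism, but the low-water-mark policy the abstract describes is easy to illustrate in software. The C++ sketch below is a conceptual analogue under simplifying assumptions, not the Minos microarchitecture itself: the single tag bit per 32-bit word and all function names (fromUntrustedInput, controlTransferTo, and so on) are invented for illustration. It shows the two rules the abstract relies on: a result inherits the lowest integrity of its operands, and the integrity check fires only when a word is about to be loaded into the program counter.

        #include <cstdint>
        #include <iostream>
        #include <stdexcept>

        // One integrity tag bit per 32-bit word (a simplification of the hardware).
        struct TaggedWord {
            uint32_t value;
            bool highIntegrity;   // true = trusted, false = tainted
        };

        // Data arriving from an untrusted source (e.g. the network) enters low-integrity.
        TaggedWord fromUntrustedInput(uint32_t v) { return {v, false}; }

        // Data established by trusted code enters high-integrity.
        TaggedWord trusted(uint32_t v) { return {v, true}; }

        // Low-water-mark propagation: a result is only as trustworthy as its
        // least trustworthy operand.
        TaggedWord add(const TaggedWord& a, const TaggedWord& b) {
            return {a.value + b.value, a.highIntegrity && b.highIntegrity};
        }

        // The check happens only when a word is used as control data, i.e. when it
        // would be loaded into the program counter.
        void controlTransferTo(const TaggedWord& target) {
            if (!target.highIntegrity)
                throw std::runtime_error("trap: low-integrity control transfer");
            std::cout << "jump to 0x" << std::hex << target.value << std::dec << "\n";
        }

        int main() {
            TaggedWord ret = trusted(0x08048000u);             // legitimate return address
            controlTransferTo(ret);                            // allowed

            TaggedWord evil = fromUntrustedInput(0xdeadbeefu); // attacker-supplied bytes
            TaggedWord target = add(evil, trusted(0));         // taint propagates
            try {
                controlTransferTo(target);                     // stopped before the jump
            } catch (const std::exception& e) {
                std::cout << e.what() << "\n";
            }
        }

    Note that ordinary loads, stores, and arithmetic on tainted data are never blocked in this scheme; only the use of such data as a control-transfer target is, which is why the approach needs no a priori separation of control and non-control data.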

Jedidiah R Crandall - One of the best experts on this subject based on the ideXlab platform.

  • Minos: Control Data Attack Prevention Orthogonal to Memory Model
    International Symposium on Microarchitecture, 2004
    Co-Authors: Jedidiah R Crandall, Frederic T Chong
    Abstract:

    We introduce Minos, a microarchitecture that implements Biba's low-water-mark integrity policy on individual words of data. Minos stops attacks that corrupt control data to hijack program control flow but is orthogonal to the memory model. Control data is any data which is loaded into the program counter on control flow transfer, or any data used to calculate such data. The key is that Minos tracks the integrity of all data, but protects control flow by checking this integrity when a program uses the data for control transfer. Existing policies, in contrast, need to differentiate between control and non-control data a priori, a task made impossible by coercions between pointers and other data types such as integers in the C language. Our implementation of Minos for Red Hat Linux 6.2 on a Pentium-based emulator is a stable, usable Linux system on the network on which we are currently running a web server. Our emulated Minos systems running Linux and Windows have stopped several actual attacks. We present a microarchitectural implementation of Minos that achieves negligible impact on cycle time with a small investment in die area, and minor changes to the Linux kernel to handle the tag bits and perform virtual memory swapping.

Masato Okada - One of the best experts on this subject based on the ideXlab platform.

  • Rate reduction for associative memory model in Hodgkin-Huxley type network
    Journal of the Physical Society of Japan, 2008
    Co-Authors: Masafumi Oizumi, Masato Okada, Yoichi Miyawaki
    Abstract:

    We proposed a systematic method for reducing Hodgkin–Huxley-type networks to networks of rate equations on the basis of Shriki et al.'s formulation. Our rate model predicts the results of numerical simulations of the Hodgkin–Huxley-type network model very precisely over a broad range of synaptic conductances. However, in the proposed framework, we made the ad hoc assumption that the firing threshold and the gain of the f–I curve of the Hodgkin–Huxley-type conductance-based model have a second-order dependence on leak conductance. Here, we discuss optimal model selection with respect to how the threshold and the gain of the f–I curve depend on leak conductance, using the Akaike information criterion. We then apply our rate reduction method to an associative memory model of Hodgkin–Huxley neurons. Most associative memory models have been studied using two-state neurons or graded-response neurons. We check the correspondence between an associative memory model of Hodgkin–Huxley neurons and that of graded-response neurons, parti... (A rough numerical sketch of this rate reduction appears after this publication list.)

  • Retrieval of branching sequences in an associative memory model with common external input and bias input
    Journal of the Physical Society of Japan, 2007
    Co-Authors: Kentaro Katahira, Masaki Kawamura, Kazuo Okanoya, Masato Okada
    Abstract:

    We investigate a recurrent neural network model with common external and bias inputs that can retrieve branching sequences. Retrieval of memory sequences is one of the most important functions of the brain. A lot of research has been done on neural networks that process memory sequences. Most of it has focused on fixed memory sequences. However, many animals can remember and recall branching sequences. Therefore, we propose an associative memory model that can retrieve branching sequences. Our model has bias input and common external input. Kawamura and Okada reported that common external input enables sequential memory retrieval in an associative memory model with auto- and weak cross-correlation connections. We show that retrieval processes along branching sequences are controllable with both the bias input and the common external input. To analyze the behaviors of our model, we derived the macroscopic dynamical description as a probability density function. The results obtained by our theory agree with...

  • Storage capacity diverges with synaptic efficiency in an associative memory model with synaptic delay and pruning
    IEEE Transactions on Neural Networks, 2004
    Co-Authors: Seiji Miyoshi, Masato Okada
    Abstract:

    It is known that storage capacity per synapse increases by synaptic pruning in the case of a correlation-type associative memory model. However, the storage capacity of the entire network then decreases. To overcome this difficulty, we propose decreasing the connectivity while keeping the total number of synapses constant by introducing delayed synapses. In this paper, a discrete synchronous-type model with both delayed synapses and their prunings is discussed as a concrete example of the proposal. First, we explain the Yanai-Kim theory by employing statistical neurodynamics. This theory involves macrodynamical equations for the dynamics of a network with serial delay elements. Next, considering the translational symmetry of the explained equations, we rederive macroscopic steady-state equations of the model by using the discrete Fourier transformation. The storage capacities are analyzed quantitatively. Furthermore, two types of synaptic prunings are treated analytically: random pruning and systematic pruning. As a result, it becomes clear that in both prunings, the storage capacity increases as the length of delay increases and the connectivity of the synapses decreases when the total number of synapses is constant. Moreover, an interesting fact becomes clear: the storage capacity asymptotically approaches 2/π due to random pruning. In contrast, the storage capacity diverges in proportion to the logarithm of the length of delay by systematic pruning and the proportion constant is 4/π. These results theoretically support the significance of pruning following an overgrowth of synapses in the brain and may suggest that the brain prefers to store dynamic attractors such as sequences and limit cycles rather than equilibrium states.

  • Storage Capacity Diverges with Synaptic Efficiency in an Associative Memory Model with Synaptic Delay and Pruning
    arXiv: Disordered Systems and Neural Networks, 2003
    Co-Authors: Seiji Miyoshi, Masato Okada
    Abstract:

    It is known that storage capacity per synapse increases by synaptic pruning in the case of a correlation-type associative memory model. However, the storage capacity of the entire network then decreases. To overcome this difficulty, we propose decreasing the connecting rate while keeping the total number of synapses constant by introducing delayed synapses. In this paper, a discrete synchronous-type model with both delayed synapses and their prunings is discussed as a concrete example of the proposal. First, we explain the Yanai-Kim theory by employing statistical neurodynamics. This theory involves macrodynamical equations for the dynamics of a network with serial delay elements. Next, considering the translational symmetry of the explained equations, we re-derive macroscopic steady-state equations of the model by using the discrete Fourier transformation. The storage capacities are analyzed quantitatively. Furthermore, two types of synaptic prunings are treated analytically: random pruning and systematic pruning. As a result, it becomes clear that in both prunings, the storage capacity increases as the length of delay increases and the connecting rate of the synapses decreases when the total number of synapses is constant. Moreover, an interesting fact becomes clear: the storage capacity asymptotically approaches 2/π due to random pruning. In contrast, the storage capacity diverges in proportion to the logarithm of the length of delay by systematic pruning and the proportion constant is 4/π. These results theoretically support the significance of pruning following an overgrowth of synapses in the brain and strongly suggest that the brain prefers to store dynamic attractors such as sequences and limit cycles rather than equilibrium states.
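
    The associative-memory entries above all build on the correlation-type (Hebbian) model. As a point of reference, the following C++ sketch implements the plain version of that model, without the delayed synapses, pruning, or common external input analyzed in the papers: asymmetric cross-correlation couplings store a short cyclic sequence of random patterns, and synchronous updates retrieve it one pattern per time step. The network size, pattern count, and noise level are arbitrary illustrative choices, and the loading rate is kept well below capacity.

        #include <iostream>
        #include <random>
        #include <vector>

        int main() {
            const int N = 500;   // neurons
            const int P = 5;     // patterns forming the stored (cyclic) sequence
            std::mt19937 rng(42);
            std::bernoulli_distribution coin(0.5);

            // Random +/-1 patterns xi[mu][i].
            std::vector<std::vector<int>> xi(P, std::vector<int>(N));
            for (auto& pattern : xi)
                for (auto& x : pattern) x = coin(rng) ? 1 : -1;

            // Cross-correlation couplings J_ij = (1/N) * sum_mu xi[mu+1][i] * xi[mu][j]
            // push the state from pattern mu toward pattern mu+1.
            std::vector<std::vector<double>> J(N, std::vector<double>(N, 0.0));
            for (int mu = 0; mu < P; ++mu)
                for (int i = 0; i < N; ++i)
                    for (int j = 0; j < N; ++j)
                        J[i][j] += xi[(mu + 1) % P][i] * xi[mu][j] / double(N);

            // Start from a noisy version of pattern 0 (roughly 10% of the bits flipped).
            std::vector<int> s = xi[0];
            std::uniform_int_distribution<int> site(0, N - 1);
            for (int k = 0; k < N / 10; ++k) s[site(rng)] *= -1;

            // Synchronous updates; the overlap with each stored pattern shows the
            // sequence being traversed, one pattern per step.
            for (int t = 0; t <= P; ++t) {
                std::cout << "t=" << t << " overlaps:";
                for (int mu = 0; mu < P; ++mu) {
                    double m = 0.0;
                    for (int i = 0; i < N; ++i) m += xi[mu][i] * s[i];
                    std::cout << " " << m / N;
                }
                std::cout << "\n";

                std::vector<int> next(N);
                for (int i = 0; i < N; ++i) {
                    double h = 0.0;
                    for (int j = 0; j < N; ++j) h += J[i][j] * s[j];
                    next[i] = (h >= 0.0) ? 1 : -1;
                }
                s = next;
            }
        }

    The papers above extend this basic picture with serial delay elements, pruning of the couplings, bias and common external inputs, and Hodgkin-Huxley rather than binary units; none of those extensions is modeled here.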
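
    For the rate-reduction entry at the top of this list, the idea can also be sketched numerically. The C++ fragment below is a rough illustration under assumed parameters, not the paper's fitted reduction: a threshold-linear f-I curve whose gain and threshold depend quadratically on the leak conductance (echoing the second-order assumption in the abstract) drives a single self-coupled rate unit. All coefficients and constants are placeholders.

        #include <algorithm>
        #include <iostream>

        // Threshold-linear f-I curve: rate = gain * max(I - threshold, 0).
        // The quadratic dependence of gain and threshold on the leak conductance is
        // the assumed second-order form; the numbers are made up for illustration.
        double firingRate(double I, double gLeak) {
            double gain      = 40.0 - 3.0 * gLeak + 0.05 * gLeak * gLeak;  // Hz per unit current
            double threshold = 1.0 + 0.4 * gLeak + 0.01 * gLeak * gLeak;   // current units
            return gain * std::max(I - threshold, 0.0);
        }

        int main() {
            // One self-coupled rate unit: tau * dr/dt = -r + f(I_ext + J * r).
            const double tau = 10.0, dt = 0.1, J = 0.02, Iext = 5.0, gLeak = 5.0;
            double r = 0.0;
            for (int step = 0; step <= 1000; ++step) {
                r += dt * (-r + firingRate(Iext + J * r, gLeak)) / tau;
                if (step % 200 == 0)
                    std::cout << "t=" << step * dt << " ms, rate=" << r << " Hz\n";
            }
        }

    In the papers, a rate network of this form replaces the full conductance-based simulation, and model selection (for example via the Akaike information criterion) decides how the gain and threshold should depend on the leak conductance.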

Sarita V Adve - One of the best experts on this subject based on the ideXlab platform.

  • Memory Models: A Case for Rethinking Parallel Languages and Hardware
    Communications of The ACM, 2010
    Co-Authors: Sarita V Adve, Hans-J. Boehm
    Abstract:

    Solving the memory model problem will require an ambitious and cross-disciplinary research direction.

  • Foundations of the C++ Concurrency Memory Model
    Programming Language Design and Implementation, 2008
    Co-Authors: Hans-J. Boehm, Sarita V Adve
    Abstract:

    Currently, multi-threaded C or C++ programs combine a single-threaded programming language with a separate threads library. This is not entirely sound [7]. We describe an effort, currently nearing completion, to address these issues by explicitly providing semantics for threads in the next revision of the C++ standard. Our approach is similar to that recently followed by Java [25], in that, at least for a well-defined and interesting subset of the language, we give sequentially consistent semantics to programs that do not contain data races. Nonetheless, a number of our decisions are often surprising even to those familiar with the Java effort: we (mostly) insist on sequential consistency for race-free programs, in spite of implementation issues that came to light after the Java work; we give no semantics to programs with data races, so there are no benign C++ data races; and we use weaker semantics for trylock than existing languages or libraries, allowing us to promise sequential consistency with an intuitive race definition, even for programs with trylock. This paper describes the simple model we would like to be able to provide for C++ threads programmers, and explains how this, together with some practical, but often under-appreciated, implementation constraints, drives us towards the above decisions.

  • the java Memory Model
    Symposium on Principles of Programming Languages, 2005
    Co-Authors: Jeremy Manson, William Pugh, Sarita V Adve
    Abstract:

    This paper describes the new Java memory model, which has been revised as part of Java 5.0. The model specifies the legal behaviors for a multithreaded program; it defines the semantics of multithreaded Java programs and partially determines legal implementations of Java virtual machines and compilers. The new Java model provides a simple interface for correctly synchronized programs -- it guarantees sequential consistency to data-race-free programs. Its novel contribution is requiring that the behavior of incorrectly synchronized programs be bounded by a well-defined notion of causality. The causality requirement is strong enough to respect the safety and security properties of Java and weak enough to allow standard compiler and hardware optimizations. To our knowledge, other models are either too weak because they do not provide for sufficient safety/security, or are too strong because they rely on a strong notion of data and control dependences that precludes some standard compiler transformations. Although the majority of what is currently done in compilers is legal, the new model introduces significant differences, and clearly defines the boundaries of legal transformations. For example, the commonly accepted definition for control dependence is incorrect for Java, and transformations based on it may be invalid. In addition to providing the official memory model for Java, we believe the model described here could prove to be a useful basis for other programming languages that currently lack well-defined models, such as C++ and C#.
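
    Both memory-model papers above rest on the same guarantee: programs without data races behave sequentially consistently. The C++ sketch below illustrates that contract with std::atomic (the variable names and the value 42 are arbitrary); analogous reasoning applies to Java's volatile fields.

        #include <atomic>
        #include <cassert>
        #include <iostream>
        #include <thread>

        int payload = 0;                 // ordinary (non-atomic) data
        std::atomic<bool> ready{false};  // synchronization variable, seq_cst by default

        void producer() {
            payload = 42;       // this write happens-before the consumer's read below,
            ready.store(true);  // because the atomic flag synchronizes the two threads
        }

        void consumer() {
            while (!ready.load()) { }    // spin until the flag is observed
            assert(payload == 42);       // guaranteed: the program is data-race-free,
                                         // so it behaves sequentially consistently
            std::cout << "payload = " << payload << "\n";
        }

        int main() {
            std::thread t1(producer), t2(consumer);
            t1.join();
            t2.join();
        }

        // If 'ready' were a plain bool, the two threads would race on it and, under the
        // C++ model described above, the whole program would have undefined behavior:
        // there are no "benign" data races.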

Jesper Larsson Träff - One of the best experts on this subject based on the ideXlab platform.