External Memory

The experts below are selected from a list of 46,743 experts worldwide, ranked by the ideXlab platform.

Laura Toma - One of the best experts on this subject based on the ideXlab platform.

  • On External Memory MST, SSSP, and Multi-way Planar Graph Separation
    Journal of Algorithms, 2004
    Co-Authors: Lars Arge, Gerth Stølting Brodal, Laura Toma
    Abstract:

    External memory graph problems have recently received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number of I/O-efficient graph algorithms have been developed, several fundamental problems remain open. The results in this paper fall into two main classes. First, we develop an improved algorithm for computing a minimum spanning tree (MST) of a general undirected graph. Second, we show that on planar undirected graphs the problems of computing a multi-way graph separation and single-source shortest paths (SSSP) can be reduced I/O-efficiently to planar breadth-first search (BFS). Since BFS can be trivially reduced to SSSP by assigning all edges weight one, it follows that in external memory planar BFS, SSSP, and multi-way separation are equivalent: if any of these problems can be solved I/O-efficiently, then all of them can be solved I/O-efficiently in the same bound. Our planar graph results have subsequently been used to obtain I/O-efficient algorithms for all fundamental problems on planar undirected graphs.
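
    To see the BFS-to-SSSP reduction concretely, here is a small in-memory sketch (plain Python, not the authors' I/O-efficient block-based algorithms): give every edge weight one, and an SSSP routine such as Dijkstra's returns exactly the BFS level of each vertex.

```python
import heapq
from collections import deque

def sssp(adj, s):
    """Dijkstra's algorithm; adj maps vertex -> list of (neighbor, weight)."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def bfs_levels(adj, s):
    """Plain BFS levels on the same adjacency structure (weights ignored)."""
    level = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v, _ in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                q.append(v)
    return level

# Undirected 4-cycle with all edge weights set to one.
adj = {
    0: [(1, 1), (2, 1)],
    1: [(0, 1), (3, 1)],
    2: [(0, 1), (3, 1)],
    3: [(1, 1), (2, 1)],
}
assert sssp(adj, 0) == bfs_levels(adj, 0)  # unit weights: SSSP == BFS
```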

  • External Memory Algorithms for Diameter and All-Pairs Shortest Paths on Sparse Graphs
    International Colloquium on Automata Languages and Programming, 2004
    Co-Authors: Lars Arge, Ulrich Meyer, Laura Toma
    Abstract:

    We develop I/O-efficient algorithms for diameter and all-pairs shortest paths (APSP). For general undirected graphs G = (V, E) with non-negative edge weights and E/V = o(B / log V), our approaches are the first to achieve o(V²) I/Os. We also show that for unweighted undirected graphs, APSP can be solved with just O(V · sort(E)) I/Os. Both our weighted and unweighted approaches require O(V²) space. For diameter computation we provide I/O-space tradeoffs. Finally, we provide improved results for both diameter and APSP computation on directed planar graphs.

  • On External Memory MST, SSSP, and Multi-way Planar Graph Separation
    Scandinavian Workshop on Algorithm Theory, 2000
    Co-Authors: Lars Arge, Gerth Stølting Brodal, Laura Toma
    Abstract:

    External memory graph algorithms have recently received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number of I/O-efficient graph algorithms have been developed, several fundamental problems remain open. In this paper we develop an improved algorithm for computing a minimum spanning tree of a general graph, as well as new algorithms for the single-source shortest paths and multi-way graph separation problems on planar graphs.

Eric A Hansen - One of the best experts on this subject based on the ideXlab platform.

  • Dynamic State-Space Partitioning in External Memory Graph Search
    International Conference on Automated Planning and Scheduling, 2011
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    The scalability of optimal sequential planning can be improved by using external memory graph search. State-of-the-art external memory graph search algorithms rely on a state-space projection function, or hash function, that partitions the stored nodes of the state-space search graph into groups of nodes that are stored as separate files on disk. Search performance depends on properties of the partition: whether the number of unique nodes in a file always fits in RAM, the number of files into which the nodes of the state-space graph are partitioned, and how well the partition captures local structure in the graph. Previous work relies on a static partition of the state space, but it can be difficult for a static partition to satisfy all of these criteria simultaneously. We introduce a method for dynamic partitioning and show that it leads to improved search performance in solving STRIPS planning problems.
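
    A minimal sketch of the partitioning idea (hypothetical names throughout; the paper's projection functions are derived from planning-domain abstractions, not a fixed prefix as here): a projection function maps each full state to a small abstract key, and all stored nodes that share a key are appended to the same file on disk.

```python
import os
import tempfile

def project(state):
    """Hypothetical projection: keep only the first two state variables.
    All states mapping to the same abstract key share one file on disk."""
    return state[:2]

def partition_to_files(states, directory):
    """Append each state to the bucket file named by its abstract key."""
    for state in states:
        key = "_".join(map(str, project(state)))
        with open(os.path.join(directory, f"bucket_{key}.txt"), "a") as f:
            f.write(",".join(map(str, state)) + "\n")

states = [(0, 1, 7), (0, 1, 9), (2, 3, 5)]
with tempfile.TemporaryDirectory() as d:
    partition_to_files(states, d)
    # (0,1,7) and (0,1,9) share bucket_0_1; (2,3,5) gets its own bucket_2_3
    assert sorted(os.listdir(d)) == ["bucket_0_1.txt", "bucket_2_3.txt"]
```

    Duplicate detection then only needs the file for one abstract key in RAM at a time, which is where the partition properties listed in the abstract (per-file node count, file count, locality) come from.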

  • Dynamic State-Space Partitioning in External Memory Graph Search
    Dagstuhl Seminar Proceedings, 2010
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    State-of-the-art external memory graph search algorithms rely on a hash function, or equivalently, a state-space projection function, that partitions the stored nodes of the state-space search graph into groups of nodes that are stored as separate files on disk. The scalability and efficiency of the search depend on properties of the partition: whether the number of unique nodes in a file always fits in RAM, the number of files into which the nodes of the state-space graph are partitioned, and how well the partitioning of the state space captures local structure in the graph. All previous work relies on a static partitioning of the state space. In this paper we introduce a method for dynamic partitioning of the state-space search graph and show that it leads to a substantial improvement in search performance.

  • Parallel Structured Duplicate Detection
    National Conference on Artificial Intelligence, 2007
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    We describe a novel approach to parallelizing graph search using structured duplicate detection. Structured duplicate detection was originally developed as an approach to external memory graph search that reduces the number of expensive disk I/O operations needed to check stored nodes for duplicates, by using an abstraction of the search graph to localize memory references. In this paper we show that this approach can also be used to reduce the number of slow synchronization operations needed in parallel graph search. In addition, we describe several techniques for integrating parallel and external memory graph search efficiently. We demonstrate the effectiveness of these techniques in a graph-search algorithm for domain-independent STRIPS planning.

  • Structured Duplicate Detection in External Memory Graph Search
    National Conference on Artificial Intelligence, 2004
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    We consider how to use external memory, such as disk storage, to improve the scalability of heuristic search in state-space graphs. To limit the number of slow disk I/O operations, we develop a new approach to duplicate detection in graph search that localizes memory references by partitioning the search graph based on an abstraction of the state space, and by expanding the frontier nodes of the graph in an order that respects this partition. We demonstrate the effectiveness of this approach both analytically and empirically.
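
    The core mechanism can be sketched as an in-memory toy (hypothetical names; the paper's version keeps each bucket in its own disk file and uses planning-domain abstractions): nodes are bucketed by an abstraction of the state, and the frontier is expanded one bucket at a time, so each duplicate check stays inside a single bucket rather than scanning the whole closed list.

```python
from collections import defaultdict

def abstract(state):
    # Hypothetical abstraction: bucket states by their first component.
    return state[0]

def sdd_search(start, successors):
    """Breadth-first-style search with the closed list partitioned by an
    abstraction. In the external memory version each bucket is a disk file;
    expanding one bucket at a time localizes duplicate checks to the buckets
    abstractly reachable from it."""
    closed = defaultdict(set)      # abstract key -> stored (closed) nodes
    frontier = defaultdict(list)   # abstract key -> open nodes
    frontier[abstract(start)].append(start)
    order = []                     # order in which buckets are expanded
    while any(frontier.values()):
        key = next(k for k, v in frontier.items() if v)  # pick a nonempty bucket
        order.append(key)
        bucket, frontier[key] = frontier[key], []
        for state in bucket:
            if state in closed[key]:
                continue           # duplicate check touches only this bucket
            closed[key].add(state)
            for succ in successors(state):
                if succ not in closed[abstract(succ)]:
                    frontier[abstract(succ)].append(succ)
    return closed, order

# Tiny example graph on states (abstract_key, index).
graph = {
    (0, 0): [(0, 1), (1, 0)],
    (0, 1): [(0, 0)],
    (1, 0): [(1, 1)],
    (1, 1): [],
}
closed, order = sdd_search((0, 0), lambda s: graph[s])
assert closed[0] == {(0, 0), (0, 1)}
assert closed[1] == {(1, 0), (1, 1)}
```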

Lars Arge - One of the best experts on this subject based on the ideXlab platform.

  • On External Memory MST, SSSP, and Multi-way Planar Graph Separation
    Journal of Algorithms, 2004
    Co-Authors: Lars Arge, Gerth Stølting Brodal, Laura Toma
    Abstract:

    External memory graph problems have recently received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number of I/O-efficient graph algorithms have been developed, several fundamental problems remain open. The results in this paper fall into two main classes. First, we develop an improved algorithm for computing a minimum spanning tree (MST) of a general undirected graph. Second, we show that on planar undirected graphs the problems of computing a multi-way graph separation and single-source shortest paths (SSSP) can be reduced I/O-efficiently to planar breadth-first search (BFS). Since BFS can be trivially reduced to SSSP by assigning all edges weight one, it follows that in external memory planar BFS, SSSP, and multi-way separation are equivalent: if any of these problems can be solved I/O-efficiently, then all of them can be solved I/O-efficiently in the same bound. Our planar graph results have subsequently been used to obtain I/O-efficient algorithms for all fundamental problems on planar undirected graphs.

  • External Memory Algorithms for Diameter and All-Pairs Shortest Paths on Sparse Graphs
    International Colloquium on Automata Languages and Programming, 2004
    Co-Authors: Lars Arge, Ulrich Meyer, Laura Toma
    Abstract:

    We develop I/O-efficient algorithms for diameter and all-pairs shortest paths (APSP). For general undirected graphs G = (V, E) with non-negative edge weights and E/V = o(B / log V), our approaches are the first to achieve o(V²) I/Os. We also show that for unweighted undirected graphs, APSP can be solved with just O(V · sort(E)) I/Os. Both our weighted and unweighted approaches require O(V²) space. For diameter computation we provide I/O-space tradeoffs. Finally, we provide improved results for both diameter and APSP computation on directed planar graphs.

  • On External Memory MST, SSSP, and Multi-way Planar Graph Separation
    Scandinavian Workshop on Algorithm Theory, 2000
    Co-Authors: Lars Arge, Gerth Stølting Brodal, Laura Toma
    Abstract:

    External memory graph algorithms have recently received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number of I/O-efficient graph algorithms have been developed, several fundamental problems remain open. In this paper we develop an improved algorithm for computing a minimum spanning tree of a general graph, as well as new algorithms for the single-source shortest paths and multi-way graph separation problems on planar graphs.

Rong Zhou - One of the best experts on this subject based on the ideXlab platform.

  • Dynamic State-Space Partitioning in External Memory Graph Search
    International Conference on Automated Planning and Scheduling, 2011
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    The scalability of optimal sequential planning can be improved by using external memory graph search. State-of-the-art external memory graph search algorithms rely on a state-space projection function, or hash function, that partitions the stored nodes of the state-space search graph into groups of nodes that are stored as separate files on disk. Search performance depends on properties of the partition: whether the number of unique nodes in a file always fits in RAM, the number of files into which the nodes of the state-space graph are partitioned, and how well the partition captures local structure in the graph. Previous work relies on a static partition of the state space, but it can be difficult for a static partition to satisfy all of these criteria simultaneously. We introduce a method for dynamic partitioning and show that it leads to improved search performance in solving STRIPS planning problems.

  • Dynamic State-Space Partitioning in External Memory Graph Search
    Dagstuhl Seminar Proceedings, 2010
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    State-of-the-art external memory graph search algorithms rely on a hash function, or equivalently, a state-space projection function, that partitions the stored nodes of the state-space search graph into groups of nodes that are stored as separate files on disk. The scalability and efficiency of the search depend on properties of the partition: whether the number of unique nodes in a file always fits in RAM, the number of files into which the nodes of the state-space graph are partitioned, and how well the partitioning of the state space captures local structure in the graph. All previous work relies on a static partitioning of the state space. In this paper we introduce a method for dynamic partitioning of the state-space search graph and show that it leads to a substantial improvement in search performance.

  • Parallel Structured Duplicate Detection
    National Conference on Artificial Intelligence, 2007
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    We describe a novel approach to parallelizing graph search using structured duplicate detection. Structured duplicate detection was originally developed as an approach to external memory graph search that reduces the number of expensive disk I/O operations needed to check stored nodes for duplicates, by using an abstraction of the search graph to localize memory references. In this paper we show that this approach can also be used to reduce the number of slow synchronization operations needed in parallel graph search. In addition, we describe several techniques for integrating parallel and external memory graph search efficiently. We demonstrate the effectiveness of these techniques in a graph-search algorithm for domain-independent STRIPS planning.

  • Structured Duplicate Detection in External Memory Graph Search
    National Conference on Artificial Intelligence, 2004
    Co-Authors: Rong Zhou, Eric A Hansen
    Abstract:

    We consider how to use external memory, such as disk storage, to improve the scalability of heuristic search in state-space graphs. To limit the number of slow disk I/O operations, we develop a new approach to duplicate detection in graph search that localizes memory references by partitioning the search graph based on an abstraction of the state space, and by expanding the frontier nodes of the graph in an order that respects this partition. We demonstrate the effectiveness of this approach both analytically and empirically.

John Agapiou - One of the best experts on this subject based on the ideXlab platform.

  • Hybrid Computing Using a Neural Network with Dynamic External Memory
    Nature, 2016
    Co-Authors: Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabskabarwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou
    Abstract:

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
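
    The content-based read step can be illustrated with a minimal pure-Python sketch (a simplification for illustration only; the actual DNC also uses temporal link matrices, dynamic memory allocation, and learned parameters): the controller emits a key vector, each memory row is scored by cosine similarity to the key, the scores are sharpened by a strength beta and normalized with a softmax, and the read vector is the resulting weighted average of the rows.

```python
import math

def content_read(memory, key, beta):
    """Content-based addressing over a memory matrix (list of rows).
    Returns (weights, read_vector): a softmax over sharpened cosine
    similarities, and the weighted average of the memory rows."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(a):
        return math.sqrt(dot(a, a))
    sims = [dot(row, key) / (norm(row) * norm(key) + 1e-8) for row in memory]
    exps = [math.exp(beta * s) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]        # read weighting over rows
    read = [sum(w * row[j] for w, row in zip(weights, memory))
            for j in range(len(memory[0]))]
    return weights, read

memory = [[1.0, 0.0],   # row 0
          [0.0, 1.0]]   # row 1
weights, read = content_read(memory, key=[1.0, 0.1], beta=10.0)
# A high beta focuses the read almost entirely on the most similar row.
assert weights[0] > 0.99
```

    Because every step (dot products, softmax, weighted sum) is differentiable, gradients can flow from the read vector back into the key and beta, which is what lets the full DNC learn its memory access patterns from data.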

  • Hybrid Computing Using a Neural Network with Dynamic External Memory
    Nature, 2016
    Co-Authors: Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabskabarwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou
    Abstract:

    A ‘differentiable neural computer’ is introduced that combines the learning capabilities of a neural network with an external memory analogous to the random-access memory in a conventional computer. Conventional computer algorithms can process extremely large and complex data structures such as the worldwide web or social networks, but they must be programmed manually by humans. Neural networks can learn from examples to recognize complex patterns, but they cannot easily parse and organize complex data structures. Now Alex Graves, Greg Wayne and colleagues have developed a hybrid learning machine, called a differentiable neural computer (DNC), that is composed of a neural network that can read from and write to an external memory structure analogous to the random-access memory in a conventional computer. The DNC can thus learn to plan routes on the London Underground, and to achieve goals in a block puzzle, merely by trial and error, without prior knowledge or ad hoc programming for such tasks.
