Parallel Search

The experts below are selected from a list of 62,400 experts worldwide, ranked by the ideXlab platform

Vipin Kumar - One of the best experts on this subject based on the ideXlab platform.

  • Predicting the Performance of Randomized Parallel Search: An Application to Robot Motion Planning
    Journal of Intelligent and Robotic Systems, 2003
    Co-Authors: Daniel J. Challou, Maria Gini, Vipin Kumar, George Karypis
    Abstract:

    In this paper we discuss methods for predicting the performance of any formulation of randomized parallel search, and propose a new performance prediction method that is based on obtaining an accurate estimate of the k-processor run-time distribution. We show that the k-processor prediction method delivers accurate performance predictions and demonstrate the validity of our analysis on several robot motion planning problems.
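
The quantity the abstract refers to, the k-processor run-time distribution, can be sketched from single-processor measurements. The following is a minimal illustration, not the authors' method: it assumes the k processors run statistically independent copies of the randomized search and the run stops as soon as any copy finds a solution, so the k-processor run time is the minimum of k i.i.d. draws from the 1-processor distribution, giving F_k(t) = 1 - (1 - F_1(t))^k. All names and parameters below are illustrative.

```python
import numpy as np

def predict_k_processor_runtime(single_proc_times, k, quantile=0.5):
    """Sketch: derive a k-processor run-time quantile from 1-processor samples.

    Assumes k independent randomized searches, terminated by the first
    solution, so F_k(t) = 1 - (1 - F_1(t))**k.  This is a background
    illustration, not the paper's prediction method.
    """
    t = np.sort(np.asarray(single_proc_times, dtype=float))
    # Empirical 1-processor CDF at each observed run time.
    f1 = np.arange(1, len(t) + 1) / len(t)
    # Induced k-processor CDF under the independence assumption.
    fk = 1.0 - (1.0 - f1) ** k
    # Smallest observed run time whose k-processor CDF reaches the quantile.
    idx = np.searchsorted(fk, quantile)
    return t[min(idx, len(t) - 1)]

# Example: heavy-tailed single-processor run times (seconds), synthetic data.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=2.0, sigma=1.0, size=1000)
for k in (1, 4, 16, 64):
    print(k, "processors ->", round(predict_k_processor_runtime(samples, k), 3), "s (median)")
```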

  • State of the art in Parallel Search techniques for discrete optimization problems
    IEEE Transactions on Knowledge and Data Engineering, 1999
    Co-Authors: Ananth Grama, Vipin Kumar
    Abstract:

    Discrete optimization problems arise in a variety of domains, such as VLSI design, transportation, scheduling and management, and design optimization. Very often, these problems are solved using state-space search techniques. Due to the high computational requirements and inherently parallel nature of search techniques, there has been a great deal of interest in the development of parallel search methods since the dawn of parallel computing. Significant advances have been made in the use of powerful heuristics and parallel processing to solve large-scale discrete optimization problems. Problem instances that were considered computationally intractable only a few years ago are now routinely solved on server-class symmetric multiprocessors and small workstation clusters. Parallel game-playing programs are challenging the best human minds at games like chess. In this paper, we describe the state of the art in parallel algorithms used for solving discrete optimization problems. We address heuristic and nonheuristic techniques for searching graphs as well as trees, and speed-up anomalies in parallel search that are caused by the inherent speculative nature of search techniques.
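
The speed-up anomalies mentioned above arise because a parallel search may explore a different, sometimes much smaller, portion of the search space than its sequential counterpart. A contrived toy illustration (not from the paper; the tree, the work split, and the goal position are all chosen so the effect is visible): two workers splitting the root's subtrees can reach a goal hidden in the right subtree after a handful of expansions, while sequential depth-first search must first exhaust the entire left subtree.

```python
import math

def dfs_steps_to_goal(root, goal, children):
    """Depth-first search; return node expansions needed to reach the goal,
    or math.inf if the goal is not in this subtree."""
    stack, expanded = [root], 0
    while stack:
        n = stack.pop()
        expanded += 1
        if n == goal:
            return expanded
        stack.extend(reversed(children.get(n, [])))  # left child popped first
    return math.inf

def full_binary_tree(depth):
    """Complete binary tree as an adjacency dict with string node ids."""
    children = {}
    def build(name, d):
        if d == 0:
            return
        kids = [name + "L", name + "R"]
        children[name] = kids
        for k in kids:
            build(k, d - 1)
    build("*", depth)
    return children

children = full_binary_tree(12)
goal = "*R" + "L" * 11                       # leftmost leaf of the right subtree

seq = dfs_steps_to_goal("*", goal, children)  # exhausts the left subtree first
# Two workers expand one node per step each; the run stops as soon as either
# worker reaches the goal, so wall-clock steps = min over the workers.
par = min(dfs_steps_to_goal("*L", goal, children),
          dfs_steps_to_goal("*R", goal, children))
print("sequential expansions until goal:", seq)
print("parallel steps with 2 workers:   ", par)
print("work speed-up:", seq / par)            # far greater than 2: superlinear
```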

  • Parallel Search algorithms for discrete optimization problems
    System Modelling and Optimization, 1994
    Co-Authors: Ananth Grama, Vipin Kumar
    Abstract:

    Discrete optimization problems (DOPs) arise in various applications such as planning, scheduling, computer-aided design, robotics, game playing, and constraint-directed reasoning. Often, a DOP is formulated in terms of finding a minimum-cost solution path in a graph from an initial node to a goal node. It is solved using graph/tree search methods such as backtracking, branch-and-bound, heuristic search, and dynamic programming. The availability of parallel computers has created substantial interest in exploring the use of parallel processing for solving discrete optimization problems. This article provides an overview of our work on parallel search algorithms.
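
One recurring pattern in this family of techniques is parallel branch-and-bound with a shared incumbent bound. The sketch below is illustrative only: the 0/1 knapsack instance, the crude optimistic bound, and the one-worker-per-root-branch split are assumptions, not the algorithms of the paper, and CPython threads are used to show the bound-sharing pattern rather than real speed-up.

```python
import threading

values = [10, 7, 4, 9, 6, 3]
weights = [4, 3, 2, 5, 3, 1]
capacity = 9

best = {"value": 0}          # shared incumbent (best complete solution so far)
lock = threading.Lock()

def branch_and_bound(i, value, weight):
    """Explore items i.. given a partial (value, weight); prune with the bound."""
    if weight > capacity:
        return
    if i == len(values):
        with lock:                       # publish an improved incumbent
            if value > best["value"]:
                best["value"] = value
        return
    # Optimistic bound: pretend every remaining item fits for free.
    if value + sum(values[i:]) <= best["value"]:
        return                           # pruned by the shared incumbent
    branch_and_bound(i + 1, value + values[i], weight + weights[i])
    branch_and_bound(i + 1, value, weight)

# One worker per branch of the root decision (include / exclude item 0).
workers = [
    threading.Thread(target=branch_and_bound, args=(1, values[0], weights[0])),
    threading.Thread(target=branch_and_bound, args=(1, 0, 0)),
]
for w in workers:
    w.start()
for w in workers:
    w.join()
print("best value found:", best["value"])
```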

  • Parallel Search algorithms for robot motion planning
    International Conference on Robotics and Automation, 1993
    Co-Authors: Daniel J. Challou, Maria Gini, Vipin Kumar
    Abstract:

    The authors show that parallel search techniques derived from their sequential counterparts can enable the solution of instances of the robot motion planning problem that are computationally infeasible on sequential machines. A parallel version of a robot motion planning algorithm based on quasi-best-first search with randomized escape from local minima and random backtracking is presented. Its performance on a problem instance that was computationally infeasible on a single processor of an nCUBE2 multicomputer is discussed. The limitations of parallel robot motion planning systems are discussed, and a course for future work is suggested.
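
The sequential core of such a planner, greedy descent on a potential with a random walk to escape local minima, can be sketched as follows. This is a toy single-processor illustration under invented assumptions (a small grid, Manhattan distance as the potential, a fixed escape length); it does not include the paper's random backtracking or its parallel formulation.

```python
import random

GRID = ["..........",
        "..######..",
        "..#....#..",
        "..#.##.#..",
        "....##....",
        ".........."]
START, GOAL = (0, 0), (2, 5)          # GOAL sits inside a concave obstacle

def free(cell):
    r, c = cell
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def neighbors(cell):
    r, c = cell
    return [n for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)) if free(n)]

def potential(cell):
    """Toy potential: Manhattan distance to the goal."""
    return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

def randomized_plan(max_iters=10000, escape_len=15, seed=0):
    rng = random.Random(seed)
    cell, path = START, [START]
    for _ in range(max_iters):
        if cell == GOAL:
            return path
        best = min(neighbors(cell), key=potential)
        if potential(best) < potential(cell):
            cell = best                          # greedy descent step
            path.append(cell)
        else:                                    # local minimum: random escape
            for _ in range(escape_len):
                cell = rng.choice(neighbors(cell))
                path.append(cell)
    return None                                  # give up within the budget

path = randomized_plan()
print("path found:", path is not None, "| moves:", len(path or []))
```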

Daniel J. Challou - One of the best experts on this subject based on the ideXlab platform.

  • Predicting the Performance of Randomized Parallel Search: An Application to Robot Motion Planning
    Journal of Intelligent and Robotic Systems, 2003
    Co-Authors: Daniel J. Challou, Maria Gini, Vipin Kumar, George Karypis
    Abstract:

    In this paper we discuss methods for predicting the performance of any formulation of randomized parallel search, and propose a new performance prediction method that is based on obtaining an accurate estimate of the k-processor run-time distribution. We show that the k-processor prediction method delivers accurate performance predictions and demonstrate the validity of our analysis on several robot motion planning problems.

  • Parallel Search algorithms for robot motion planning
    International Conference on Robotics and Automation, 1993
    Co-Authors: Daniel J. Challou, Maria Gini, Vipin Kumar
    Abstract:

    The authors show that parallel search techniques derived from their sequential counterparts can enable the solution of instances of the robot motion planning problem that are computationally infeasible on sequential machines. A parallel version of a robot motion planning algorithm based on quasi-best-first search with randomized escape from local minima and random backtracking is presented. Its performance on a problem instance that was computationally infeasible on a single processor of an nCUBE2 multicomputer is discussed. The limitations of parallel robot motion planning systems are discussed, and a course for future work is suggested.

Maria Gini - One of the best experts on this subject based on the ideXlab platform.

  • Predicting the Performance of Randomized Parallel Search: An Application to Robot Motion Planning
    Journal of Intelligent and Robotic Systems, 2003
    Co-Authors: Daniel J. Challou, Maria Gini, Vipin Kumar, George Karypis
    Abstract:

    In this paper we discuss methods for predicting the performance of any formulation of randomized parallel search, and propose a new performance prediction method that is based on obtaining an accurate estimate of the k-processor run-time distribution. We show that the k-processor prediction method delivers accurate performance predictions and demonstrate the validity of our analysis on several robot motion planning problems.

  • Parallel Search algorithms for robot motion planning
    International Conference on Robotics and Automation, 1993
    Co-Authors: Daniel J. Challou, Maria Gini, Vipin Kumar
    Abstract:

    The authors show that parallel search techniques derived from their sequential counterparts can enable the solution of instances of the robot motion planning problem that are computationally infeasible on sequential machines. A parallel version of a robot motion planning algorithm based on quasi-best-first search with randomized escape from local minima and random backtracking is presented. Its performance on a problem instance that was computationally infeasible on a single processor of an nCUBE2 multicomputer is discussed. The limitations of parallel robot motion planning systems are discussed, and a course for future work is suggested.

In-joong Kim - One of the best experts on this subject based on the ideXlab platform.

  • SIGMOD Conference - ODYS: an approach to building a massively-Parallel Search engine using a DB-IR tightly-integrated Parallel DBMS for higher-level functionality
    Proceedings of the 2013 international conference on Management of data - SIGMOD '13, 2013
    Co-Authors: Kyu-young Whang, Tae-seob Yun, Yeon-mi Yeo, Il-yeol Song, Hyuk-yoon Kwon, In-joong Kim
    Abstract:

    Recently, parallel search engines have been implemented based on scalable distributed file systems such as the Google File System. However, we claim that building a massively-parallel search engine using a parallel DBMS can be an attractive alternative, since it supports a higher-level (i.e., SQL-level) interface than that of a distributed file system for easy and less error-prone application development while providing scalability. Regarding higher-level functionality, we can draw a parallel with the traditional O/S file system vs. DBMS. In this paper, we propose a new approach to building a massively-parallel search engine using a DB-IR tightly-integrated parallel DBMS. To estimate the performance, we propose a hybrid (i.e., analytic and experimental) performance model for the parallel search engine. We argue that the model can accurately estimate the performance of a massively-parallel (e.g., 300-node) search engine using the experimental results obtained from a small-scale (e.g., 5-node) one. We show that the estimation error between the model and the actual experiment is less than 2.13% by observing that the bulk of the query processing time is spent at the slave (vs. at the master and network) and by estimating the time spent at the slave based on actual measurement. Using our model, we demonstrate commercial-level scalability and performance of our architecture. Our proposed system, ODYS, is capable of handling 1 billion queries per day (81 queries/sec) for 30 billion Web pages by using only 43,472 nodes with an average query response time of 194 ms. By using twice as many (86,944) nodes, ODYS can provide an average query response time of 148 ms. These results show that building a massively-parallel search engine using a parallel DBMS is a viable approach, with the advantage of supporting a high-level (i.e., DBMS-level), SQL-like programming interface.
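
The extrapolation style such a hybrid model relies on can be caricatured in a few lines. The sketch below is not the paper's model: the decomposition into slave, master, and network components follows the abstract, but the parameter names, the assumption that per-slave time is unchanged when each node keeps the same amount of data, and the logarithmic growth of the remaining overhead are all invented for illustration.

```python
import math

def project_response_time(measured_slave_ms, measured_master_ms,
                          measured_net_ms, small_nodes, big_nodes):
    """Toy projection of per-query response time from a small cluster to a
    large one.  Hypothetical model: slave time dominates and is assumed
    invariant (same data volume per node); master and network overheads are
    assumed to grow with fan-out by a logarithmic factor."""
    slave = measured_slave_ms
    growth = math.log2(big_nodes) / math.log2(small_nodes)
    return slave + (measured_master_ms + measured_net_ms) * growth

# Hypothetical 5-node measurements (milliseconds), for illustration only.
projected = project_response_time(180, 5, 4, small_nodes=5, big_nodes=300)
print("projected 300-node response time:", round(projected, 1), "ms")
```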

  • ODYS: A Massively-Parallel Search Engine Using a DB-IR Tightly-Integrated Parallel DBMS
    arXiv: Databases, 2012
    Co-Authors: Kyu-young Whang, Tae-seob Yun, Yeon-mi Yeo, Il-yeol Song, Hyuk-yoon Kwon, In-joong Kim
    Abstract:

    Recently, parallel search engines have been implemented based on scalable distributed file systems such as the Google File System. However, we claim that building a massively-parallel search engine using a parallel DBMS can be an attractive alternative, since it supports a higher-level (i.e., SQL-level) interface than that of a distributed file system for easy and less error-prone application development while providing scalability. In this paper, we propose a new approach to building a massively-parallel search engine using a DB-IR tightly-integrated parallel DBMS and demonstrate its commercial-level scalability and performance. In addition, we present a hybrid (i.e., analytic and experimental) performance model for the parallel search engine. We have built a five-node parallel search engine according to the proposed architecture using a DB-IR tightly-integrated DBMS. Through extensive experiments, we show the correctness of the model by comparing the projected output with the experimental results of the five-node engine. Our model demonstrates that ODYS is capable of handling 1 billion queries per day (81 queries/sec) for 30 billion web pages by using only 43,472 nodes with an average query response time of 211 ms, which is equivalent to or better than those of commercial search engines. We also show that, by using twice as many (86,944) nodes, ODYS can provide an average query response time of 162 ms, which is significantly lower than those of commercial search engines.

Kyu-young Whang - One of the best experts on this subject based on the ideXlab platform.

  • SIGMOD Conference - ODYS: an approach to building a massively-Parallel Search engine using a DB-IR tightly-integrated Parallel DBMS for higher-level functionality
    Proceedings of the 2013 international conference on Management of data - SIGMOD '13, 2013
    Co-Authors: Kyu-young Whang, Tae-seob Yun, Yeon-mi Yeo, Il-yeol Song, Hyuk-yoon Kwon, In-joong Kim
    Abstract:

    Recently, parallel search engines have been implemented based on scalable distributed file systems such as the Google File System. However, we claim that building a massively-parallel search engine using a parallel DBMS can be an attractive alternative, since it supports a higher-level (i.e., SQL-level) interface than that of a distributed file system for easy and less error-prone application development while providing scalability. Regarding higher-level functionality, we can draw a parallel with the traditional O/S file system vs. DBMS. In this paper, we propose a new approach to building a massively-parallel search engine using a DB-IR tightly-integrated parallel DBMS. To estimate the performance, we propose a hybrid (i.e., analytic and experimental) performance model for the parallel search engine. We argue that the model can accurately estimate the performance of a massively-parallel (e.g., 300-node) search engine using the experimental results obtained from a small-scale (e.g., 5-node) one. We show that the estimation error between the model and the actual experiment is less than 2.13% by observing that the bulk of the query processing time is spent at the slave (vs. at the master and network) and by estimating the time spent at the slave based on actual measurement. Using our model, we demonstrate commercial-level scalability and performance of our architecture. Our proposed system, ODYS, is capable of handling 1 billion queries per day (81 queries/sec) for 30 billion Web pages by using only 43,472 nodes with an average query response time of 194 ms. By using twice as many (86,944) nodes, ODYS can provide an average query response time of 148 ms. These results show that building a massively-parallel search engine using a parallel DBMS is a viable approach, with the advantage of supporting a high-level (i.e., DBMS-level), SQL-like programming interface.

  • ODYS: A Massively-Parallel Search Engine Using a DB-IR Tightly-Integrated Parallel DBMS
    arXiv: Databases, 2012
    Co-Authors: Kyu-young Whang, Tae-seob Yun, Yeon-mi Yeo, Il-yeol Song, Hyuk-yoon Kwon, In-joong Kim
    Abstract:

    Recently, parallel search engines have been implemented based on scalable distributed file systems such as the Google File System. However, we claim that building a massively-parallel search engine using a parallel DBMS can be an attractive alternative, since it supports a higher-level (i.e., SQL-level) interface than that of a distributed file system for easy and less error-prone application development while providing scalability. In this paper, we propose a new approach to building a massively-parallel search engine using a DB-IR tightly-integrated parallel DBMS and demonstrate its commercial-level scalability and performance. In addition, we present a hybrid (i.e., analytic and experimental) performance model for the parallel search engine. We have built a five-node parallel search engine according to the proposed architecture using a DB-IR tightly-integrated DBMS. Through extensive experiments, we show the correctness of the model by comparing the projected output with the experimental results of the five-node engine. Our model demonstrates that ODYS is capable of handling 1 billion queries per day (81 queries/sec) for 30 billion web pages by using only 43,472 nodes with an average query response time of 211 ms, which is equivalent to or better than those of commercial search engines. We also show that, by using twice as many (86,944) nodes, ODYS can provide an average query response time of 162 ms, which is significantly lower than those of commercial search engines.

  • CIKM - DB-IR integration and its application to a massively-Parallel Search engine
    Proceedings of the 18th ACM conference on Information and knowledge management - CIKM '09, 2009
    Co-Authors: Kyu-young Whang
    Abstract:

    Nowadays, as there is an increasing need to integrate the DBMS (for structured data) with Information Retrieval (IR) features (for unstructured data), DB-IR integration is becoming one of the major challenges in the database area [1,2]. Extensible architectures provided by commercial object-relational DBMS (ORDBMS) vendors can be used for DB-IR integration. Here, extensions are implemented using a high-level (typically, SQL-level) interface. We call this architecture loose-coupling. The advantage of loose-coupling is ease of implementation, but it is not preferable for implementing new data types and operations in large databases when high performance is required. In this talk, we present a new DBMS architecture applicable to DB-IR integration, which we call tight-coupling. In tight-coupling, new data types and operations are integrated into the core of the DBMS engine in the extensible type layer. Thus, they are incorporated as "first-class citizens" [1] within the DBMS architecture and are supported in a consistent manner with high performance. This tight-coupling architecture is being used to incorporate IR features and spatial database features into the Odysseus ORDBMS that has been under development at KAIST/AITrc for over 19 years. In this talk, we introduce Odysseus and explain its tightly-coupled IR features (U.S. patented in 2002 [2]). Then, we demonstrate the performance advantage of tight-coupling by showing benchmark results. We have built a web search engine that is capable of managing 100 million web pages per node in a non-parallel configuration using Odysseus. This engine has been successfully tested in many commercial environments. This work won the Best Demonstration Award at the IEEE ICDE conference held in Tokyo, Japan, in April 2005 [3]. Last, we present a design of a massively-parallel search engine using Odysseus. Recently, parallel search engines have been implemented based on scalable distributed file systems (e.g., GFS). Nevertheless, building a massively-parallel search engine using a DBMS can be an attractive alternative, since it supports a higher-level (i.e., SQL-level) interface than that of a distributed file system while providing scalability. The parallel search engine designed is capable of indexing 30 billion web pages with performance comparable to or better than those of state-of-the-art search engines.
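
The loose- versus tight-coupling contrast can be caricatured in a few lines. The toy below is illustrative only and does not depict Odysseus internals: in the loose-coupled style the text predicate is an external routine applied row by row on top of an ordinary table scan, while in the tight-coupled style the text operation is served by an index that lives inside the engine's access path, so only matching rows are ever touched.

```python
from collections import defaultdict

# Toy "table" of documents (doc_id -> text column).
docs = {1: "parallel search engines on commodity clusters",
        2: "tightly integrated db ir architectures",
        3: "parallel dbms scalability and sql interfaces"}

def loose_coupled_match(keyword):
    """Loose coupling: scan every row and call an external match routine."""
    return [doc_id for doc_id, text in docs.items() if keyword in text.split()]

# Tight coupling: the IR structure (an inverted index) is part of the engine,
# built alongside the table and consulted before any rows are fetched.
inverted_index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted_index[term].add(doc_id)

def tight_coupled_match(keyword):
    """Tight coupling: answer the text predicate straight from the index."""
    return sorted(inverted_index.get(keyword, set()))

print(loose_coupled_match("parallel"))   # touches every row
print(tight_coupled_match("parallel"))   # index lookup only
```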