Handling Large Scale

The Experts below are selected from a list of 21,981 Experts worldwide, ranked by the ideXlab platform.

Hisao Ishibuchi - One of the best experts on this subject based on the ideXlab platform.

  • A Decomposition-Based Large-Scale Multi-Modal Multi-Objective Optimization Algorithm
    Congress on Evolutionary Computation, 2020
    Co-Authors: Yiming Peng, Hisao Ishibuchi
    Abstract:

    A multi-modal multi-objective optimization problem is a special kind of multi-objective optimization problem with multiple Pareto subsets. In this paper, we propose an efficient multi-modal multi-objective optimization algorithm based on the widely used MOEA/D algorithm. In our proposed algorithm, each weight vector has its own sub-population. With a clearing mechanism and a greedy removal strategy, our proposed algorithm can effectively preserve equivalent Pareto optimal solutions (i.e., different Pareto optimal solutions with the same objective values). Experimental results show that our proposed algorithm can effectively preserve the diversity of solutions in the decision space when Handling Large-Scale multi-modal multi-objective optimization problems.

Yiming Peng - One of the best experts on this subject based on the ideXlab platform.

  • A Decomposition-Based Large-Scale Multi-Modal Multi-Objective Optimization Algorithm
    Congress on Evolutionary Computation, 2020
    Co-Authors: Yiming Peng, Hisao Ishibuchi
    Abstract:

    A multi-modal multi-objective optimization problem is a special kind of multi-objective optimization problem with multiple Pareto subsets. In this paper, we propose an efficient multi-modal multi-objective optimization algorithm based on the widely used MOEA/D algorithm. In our proposed algorithm, each weight vector has its own sub-population. With a clearing mechanism and a greedy removal strategy, our proposed algorithm can effectively preserve equivalent Pareto optimal solutions (i.e., different Pareto optimal solutions with the same objective values). Experimental results show that our proposed algorithm can effectively preserve the diversity of solutions in the decision space when Handling Large-Scale multi-modal multi-objective optimization problems.
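
The mechanism described in the abstract above — one sub-population per MOEA/D weight vector, a clearing step, and greedy removal — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the clearing radius `sigma`, the scalarized fitness values, and the crowding-based removal rule are assumptions.

```python
import numpy as np

def update_subpopulation(subpop, capacity, sigma=0.1):
    """Clearing + greedy removal for one weight vector's sub-population.

    subpop   : list of (x, g) pairs, where x is a decision vector (np.ndarray)
               and g is the scalarized objective value for this weight vector
               (lower is better).
    capacity : maximum sub-population size.
    sigma    : clearing radius in decision space (illustrative value).
    """
    # Clearing: scan solutions from best to worst scalarized value; a solution
    # within sigma of an already-accepted winner is marked as "cleared".
    subpop = sorted(subpop, key=lambda s: s[1])
    winners, cleared = [], []
    for x, g in subpop:
        if any(np.linalg.norm(x - wx) < sigma for wx, _ in winners):
            cleared.append((x, g))
        else:
            winners.append((x, g))

    # Greedy removal: drop cleared solutions first; if the winners alone still
    # exceed the capacity, repeatedly remove the most crowded winner, so that
    # equivalent Pareto optimal solutions far apart in decision space survive.
    kept = (winners + cleared)[:max(capacity, len(winners))]
    while len(kept) > capacity:
        nn = [min(np.linalg.norm(kept[i][0] - kept[j][0])
                  for j in range(len(kept)) if j != i)
              for i in range(len(kept))]
        kept.pop(int(np.argmin(nn)))   # remove the solution with the smallest nearest-neighbour distance
    return kept

# Toy usage: 2-D decision vectors with scalarized values for one weight vector
rng = np.random.default_rng(0)
pool = [(rng.random(2), float(rng.random())) for _ in range(30)]
print(len(update_subpopulation(pool, capacity=10, sigma=0.2)))
```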

Bharadwaj Veeravalli - One of the best experts on this subject based on the ideXlab platform.

  • Handling Large-Scale SAR Image Data on Network-Based Compute Systems Using Divisible Load Paradigm
    High Performance Computing and Communications, 2020
    Co-Authors: Gokul Madathupalyam Chinnappan, Bharadwaj Veeravalli
    Abstract:

    Processing Synthetic Aperture Radar (SAR) image data on compute clusters is a challenging problem. One of the ways to handle such computationally intensive tasks is to design efficient load distribution strategies for a given compute infrastructure. In this paper, we adopt the Divisible Load paradigm to handle SAR image processing by using Multi-Instalment Scheduling (MIS) strategies, which have been conclusively shown to minimize idle times. Specifically, we design, analyse, and evaluate a practically viable load distribution strategy, referred to as Multi-Instalment Scheduling with Results Retrieval (MIS-RR), by considering the communication and computational overheads. In addition, we consider the results retrieval phase as a part of our solution procedure. This makes the formulation complete, as it takes into account all real-world influencing parameters in the modelling; to the best of our knowledge, this is the first work to employ a periodic MIS strategy that includes a results retrieval phase together with system overhead parameters. We present a detailed theoretical analysis followed by a rigorous performance evaluation that captures the behaviour of the strategy. We evaluate our strategy with respect to a number of influencing parameters, such as network scalability, the number of instalments, and overheads, and then attempt to identify the maximum number of processors to use for a given load size. This latter result is of practical importance for system administrators in optimizing the resources to be deployed for a given load.
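
As a rough illustration of the multi-instalment idea with a results-retrieval phase, the sketch below simulates the timeline of a naive equal-split schedule on a single shared link. It is not the MIS-RR strategy from the paper; the worker parameters, the per-instalment overheads, and the result-size fraction `theta` are assumptions.

```python
def makespan_multi_instalment(load, workers, instalments, theta=0.01):
    """Timeline of a naive equal-split multi-instalment schedule with a results
    retrieval phase on a single-level tree (one source, one shared link).

    load        : total divisible load (arbitrary units).
    workers     : list of (z, w, o_cm, o_cp) tuples, where z and w are the
                  communication/computation time per load unit and o_cm, o_cp
                  are fixed per-instalment overheads.
    instalments : number of instalments per worker (equal split, for clarity).
    theta       : result size as a fraction of the input size (an assumption).
    """
    n = len(workers)
    chunk = load / (n * instalments)
    link_free = 0.0                       # the shared link is used serially
    finish = [0.0] * n
    for _ in range(instalments):
        for i, (z, w, o_cm, o_cp) in enumerate(workers):
            send_end = link_free + o_cm + chunk * z       # deliver one instalment
            link_free = send_end
            # a worker starts an instalment once it has the data and is idle
            finish[i] = max(finish[i], send_end) + o_cp + chunk * w
    # Results retrieval phase: each worker returns theta times its total load
    for i, (z, w, o_cm, o_cp) in enumerate(workers):
        start = max(link_free, finish[i])
        link_free = start + o_cm + theta * (load / n) * z
    return link_free                      # time at which the last result arrives

# Example: three heterogeneous workers, four instalments each
print(makespan_multi_instalment(
    load=1000.0,
    workers=[(0.002, 0.010, 0.05, 0.05),
             (0.003, 0.012, 0.05, 0.05),
             (0.002, 0.015, 0.05, 0.05)],
    instalments=4))
```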

  • Performance Characterization on Handling Large-Scale Partitionable Workloads on Heterogeneous Networked Compute Platforms
    IEEE Transactions on Parallel and Distributed Systems, 2017
    Co-Authors: Xiaoli Wang, Bharadwaj Veeravalli
    Abstract:

    Multi-installment scheduling (MIS) has shown great effectiveness in minimizing the processing time for Large-Scale partitionable workloads. To derive an optimal MIS strategy, one has to explicitly determine the optimal numbers of installments and processors. Existing studies tend to solve this problem by treating the influence of the number of installments (and processors) on the processing time as a continuous function and taking its derivative to determine the optimal values, which may lead to invalid solutions. In this paper, we employ periodic multi-installment scheduling (P-MIS) models for homogeneous and heterogeneous single-level tree networks. Using these models, we make the following significant contributions. First, we derive a closed-form solution for the optimal number of installments based on a given network size and a fixed load distribution sequence. Second, we propose a heuristic algorithm for determining an optimal number of processors, after first proving several important intermediate lemmas and theorems. Third, for heterogeneous systems, we propose a genetic algorithm to determine an optimal load distribution sequence. Finally, we conduct various experiments to illustrate the effectiveness of the proposed algorithms and perform a rigorous analysis of the influence of the load distribution sequence on the processing time, on the basis of which practical advice for determining a near-optimal load distribution sequence is given.
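
The abstract's caveat about continuous relaxations suggests a simple generic alternative: search the bounded integer range of installment counts directly against whatever makespan model is in use. The sketch below does exactly that with a toy makespan model; the model itself is an assumption for illustration, not the P-MIS closed form from the paper.

```python
def best_integer_instalments(makespan, m_max=64):
    """Choose the number of installments by exact search over a bounded integer
    range instead of differentiating a continuous makespan model (which, as the
    abstract notes, can return invalid non-integer optima).

    makespan : any callable m -> estimated processing time.
    """
    return min(range(1, m_max + 1), key=makespan)

def toy_makespan(m, load=1000.0, w=0.01, z=0.002, overhead=0.2):
    """Toy trade-off model (an assumption, not the P-MIS closed form): more
    installments overlap communication with computation but add per-installment
    start-up overhead."""
    chunk = load / m
    return m * overhead + chunk * z + load * w   # only the first chunk's transfer is exposed

m_star = best_integer_instalments(toy_makespan)
print(m_star, toy_makespan(m_star))
```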

  • A Genetic Algorithm Based Efficient Static Load Distribution Strategy for Handling Large-Scale Workloads on Sustainable Computing Systems
    Decision Support Systems, 2017
    Co-Authors: Xiaoli Wang, Bharadwaj Veeravalli
    Abstract:

    A key challenge for Large-Scale computing platforms seeking to go green is the effective utilization of energy at the various processing nodes. Most existing scheduling models assume that processors are able to stay online forever. In reality, however, processors may have arbitrary unavailable time periods. Hence, if we inadvertently assign tasks to processors without considering these availability constraints, some processors will not be able to finish their assigned workloads. All the unfinished workloads then need to be reassigned to other available processors, resulting in a schedule that is inefficient in both time and energy. In this paper, we propose a novel processor availability-aware divisible-load scheduling model. Using this model, we design a time-efficient genetic algorithm based global optimization technique to derive an optimal load distribution strategy. Our experimental results show that the proposed algorithm reduces the processing time, and hence the energy consumption, by over 60% compared to other strategies.
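
A minimal sketch of the general approach — a genetic algorithm searching over load fractions with a penalty for work that cannot finish inside a processor's availability window — is given below. The encoding, the fitness function (computation time only, no communication delays), and the GA operators are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(fracs, load, speeds, avail):
    """Penalised makespan (computation time only, an assumption): a processor
    must finish its fraction before its availability window avail[i] ends."""
    t = fracs * load / speeds
    penalty = np.sum(np.maximum(t - avail, 0.0)) * 1e3   # unfinished work is penalised
    return t.max() + penalty

def ga_load_distribution(load, speeds, avail, pop=40, gens=200, mut=0.1):
    """A generic GA over load fractions (a sketch, not the paper's encoding)."""
    n = len(speeds)
    P = rng.dirichlet(np.ones(n), size=pop)               # fractions sum to 1
    for _ in range(gens):
        f = np.array([fitness(x, load, speeds, avail) for x in P])
        parents = P[np.argsort(f)[:pop // 2]]             # truncation selection
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = (a + b) / 2                           # arithmetic crossover
            child = np.abs(child + rng.normal(0, mut, n)) # Gaussian mutation
            kids.append(child / child.sum())              # repair: renormalise
        P = np.vstack([parents, kids])
    f = np.array([fitness(x, load, speeds, avail) for x in P])
    return P[np.argmin(f)]

# Processor 0 is fast but only available for 10 time units, so it should get less load
best = ga_load_distribution(load=100.0,
                            speeds=np.array([4.0, 3.0, 2.0]),
                            avail=np.array([10.0, 40.0, 40.0]))
print(best.round(3))
```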

  • On Handling Large-Scale Polynomial Multiplications in Compute Cloud Environments using Divisible Load Paradigm
    IEEE Transactions on Aerospace and Electronic Systems, 2012
    Co-Authors: Ganesh Neelakanta Iyer, Bharadwaj Veeravalli, Sakthi Ganesh Krishnamoorthy
    Abstract:

    Large-Scale polynomial product computations, often used in aerospace applications such as satellite image processing and sensor network data processing, pose a considerable challenge when processed on networked computing systems. With non-zero communication and computation time delays of the links and processors on a networked infrastructure, the computation becomes all the more challenging. In this research, we investigate the use of the divisible load paradigm to design efficient strategies that minimize the overall processing time for performing Large-Scale polynomial product computations in compute cloud environments. We consider a compute cloud system in which the resource allocator distributes the entire load to a set of virtual CPU instances (VCIs) and the VCIs propagate the processed results back to the resource allocator for post-processing. We consider heterogeneous networks in our analysis and derive fundamental recursive equations and a closed-form solution for the load fractions to be assigned to each VCI. Our analysis also attempts to eliminate any redundant VCI-link pairs by carefully considering the overheads associated with load distribution and processing. Finally, we quantify the performance of the strategies via rigorous simulation studies.
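
For context, the textbook divisible-load recursion for a single-level tree (star) network is sketched below: all workers are forced to finish at the same instant, which yields a simple recursion and a closed form after normalisation. Result retrieval, overheads, and redundant VCI-link elimination, which the paper does model, are omitted here for brevity, so this is the generic recursion rather than the paper's equations.

```python
def dlt_load_fractions(z, w, Tcm=1.0, Tcp=1.0):
    """Textbook divisible-load recursion for a single-level tree network in
    which the source sends load fractions alpha_1..alpha_n sequentially and all
    workers stop computing at the same instant (optimality principle).

    z[i] : communication time per unit load on link i
    w[i] : computation time per unit load on worker i
    Result retrieval is omitted for brevity; the paper's model includes it.
    """
    n = len(z)
    alpha = [1.0]                       # unnormalised: alpha_1 = 1
    for i in range(n - 1):
        # equal-finish-time condition:
        #   alpha_i * w_i * Tcp = alpha_{i+1} * (z_{i+1} * Tcm + w_{i+1} * Tcp)
        alpha.append(alpha[i] * w[i] * Tcp / (z[i + 1] * Tcm + w[i + 1] * Tcp))
    s = sum(alpha)
    return [a / s for a in alpha]       # normalise so the fractions sum to 1

# Example: three heterogeneous virtual CPU instances
print(dlt_load_fractions(z=[0.2, 0.3, 0.2], w=[1.0, 1.5, 2.0]))
```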

Paolo Toth - One of the best experts on this subject based on the ideXlab platform.

  • A Railway Timetable Rescheduling Approach for Handling Large-Scale Disruptions
    Transportation Science, 2016
    Co-Authors: Lucas P Veelenturf, Martin Philip Kidd, Valentina Cacchiani, Leo Kroon, Paolo Toth
    Abstract:

    On a daily basis, Large-Scale disruptions require infrastructure managers and railway operators to reschedule their railway timetables together with their rolling stock and crew schedules. This research focuses on timetable rescheduling for passenger train services on a macroscopic level in a railway network. An integer linear programming model is formulated for solving the timetable rescheduling problem, which minimizes the number of cancelled and delayed train services while adhering to infrastructure and rolling stock capacity constraints. The possibility of rerouting train services to reduce the number of cancelled and delayed train services is also considered. In addition, all stages of the disruption management process from the start of the disruption to the time the normal situation is restored are taken into account. Computational tests of the described model on a heavily used part of the Dutch railway network show that the model is able to find optimal solutions in short computation times. This makes the approach applicable for use in practice.

  • A Railway Timetable Rescheduling Approach for Handling Large-Scale Disruptions
    ERIM report series research in management Erasmus Research Institute of Management, 2014
    Co-Authors: Lucas P Veelenturf, Martin Philip Kidd, Valentina Cacchiani, Leo Kroon, Paolo Toth
    Abstract:

    On a daily basis, relatively Large disruptions require infrastructure managers and railway operators to reschedule their railway timetables together with their rolling stock and crew schedules. This research focuses on timetable rescheduling for passenger trains at a macroscopic level in a railway network. An integer programming model is formulated for solving the timetable rescheduling problem, which minimizes the number of cancelled and delayed trains while adhering to infrastructure and rolling stock capacity constraints. The possibility of rerouting trains in order to reduce the number of cancelled and delayed trains is also considered. In addition, all stages of the disruption management process (from the start of the disruption to the time the normal situation is restored) are taken into account. Computational tests of the described model on a heavily used part of the Dutch railway network show that we are able to find optimal solutions in short computation times. This makes the approach applicable for use in practice.
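
To give a flavour of the integer programming formulation described in the two abstracts above, the sketch below sets up a toy cancel-or-delay model with PuLP. The decision variables, costs, and the single corridor-capacity constraint are illustrative assumptions, not the paper's macroscopic timetabling model with rerouting and rolling stock constraints.

```python
# Toy ILP in the spirit of the rescheduling model above; the variable set and
# the single capacity constraint are illustrative assumptions.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

trains = ["IC101", "IC102", "SPR201", "SPR202"]
slots_available = 2            # remaining capacity on the disrupted corridor
cancel_cost, delay_cost = 10, 1

prob = LpProblem("timetable_rescheduling", LpMinimize)
cancel = {t: LpVariable(f"cancel_{t}", cat=LpBinary) for t in trains}
delay = {t: LpVariable(f"delay_{t}", cat=LpBinary) for t in trains}

# Objective: cancelling a service is much worse than delaying it
prob += lpSum(cancel_cost * cancel[t] + delay_cost * delay[t] for t in trains)

# Only on-time, non-cancelled trains compete for the remaining slots;
# delayed trains are assumed to move to a later slot pool.
prob += lpSum(1 - cancel[t] - delay[t] for t in trains) <= slots_available

# A train cannot be both cancelled and delayed
for t in trains:
    prob += cancel[t] + delay[t] <= 1

prob.solve()
print({t: (int(value(cancel[t])), int(value(delay[t]))) for t in trains})
```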

Ye Tang - One of the best experts on this subject based on the ideXlab platform.

  • Structurally Enhanced Incremental Neural Learning for Image Classification with Subgraph Extraction
    International Journal of Neural Systems, 2014
    Co-Authors: Yubin Yang, Yang Gao, Hujun Yin, Ye Tang
    Abstract:

    In this paper, a structurally enhanced incremental neural learning technique is proposed to learn a discriminative codebook representation of images for effective image classification applications. In order to accommodate relationships such as structures and distributions among visual words in the codebook learning process, we develop an online codebook graph learning method based on a novel structurally enhanced incremental learning technique, called the "visualization-induced self-organized incremental neural network (ViSOINN)". The hidden structural information in the images is embedded into the graph representation, which evolves dynamically through an adaptive and competitive learning mechanism. Afterwards, image features can be coded using a sub-graph extraction process based on the learned codebook graph, and a classifier is subsequently used to complete the image classification task. Compared with other codebook learning algorithms originating from the classical Bag-of-Features (BoF) model, ViSOINN holds the following advantages: (1) it learns the codebook efficiently and effectively from a small training set; (2) it models the relationships among visual words in a metric scaling fashion, thus preserving high discriminative power; (3) it automatically learns the codebook without a fixed pre-defined size; and (4) it better preserves and enhances the structure of the data. These characteristics help to improve image classification performance and make the method more suitable for Handling Large-Scale image classification tasks. Experimental results on the widely used Caltech-101 and Caltech-256 benchmark datasets demonstrate that ViSOINN achieves markedly improved performance and considerably reduces the computational cost.
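
A schematic of the incremental, growing-graph codebook learning idea is sketched below: each incoming descriptor either inserts a new visual word or adapts its nearest word and records a co-activation edge. The fixed threshold rule, the adaptation rate, and the absence of node pruning are simplifications; this is not the ViSOINN algorithm itself.

```python
import numpy as np

class IncrementalCodebookGraph:
    """Schematic of SOINN-style incremental codebook learning: visual words are
    graph nodes, and an edge links the two nodes that win together for an input.
    Threshold handling and pruning are simplified relative to ViSOINN."""

    def __init__(self, threshold=3.0, lr=0.05):
        self.nodes = []                     # codebook vectors (visual words)
        self.edges = set()                  # pairs of node indices
        self.threshold = threshold          # insertion threshold (assumption)
        self.lr = lr                        # adaptation rate for the winner

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) < 2:
            self.nodes.append(x)
            return
        d = [np.linalg.norm(x - n) for n in self.nodes]
        w1, w2 = np.argsort(d)[:2]          # nearest and second-nearest words
        if d[w1] > self.threshold:
            self.nodes.append(x)            # novel region: insert a new word
        else:
            # familiar region: adapt the winner and record the co-activation
            self.nodes[w1] = self.nodes[w1] + self.lr * (x - self.nodes[w1])
            self.edges.add((int(min(w1, w2)), int(max(w1, w2))))

# Usage: stream local descriptors through the learner to grow the codebook graph
rng = np.random.default_rng(1)
g = IncrementalCodebookGraph()
for descriptor in rng.normal(size=(500, 8)):
    g.learn(descriptor)
print(len(g.nodes), "visual words,", len(g.edges), "edges")
```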

  • Codebook Quantization for Image Classification Using Incremental Neural Learning and Subgraph Extraction
    Intelligent Data Engineering and Automated Learning, 2012
    Co-Authors: Ye Tang, Yubin Yang, Yang Gao, Yao Zhang, Yingchun Cao
    Abstract:

    This paper proposes a fast, incremental codebook quantization algorithm for image classification, consisting of a codebook graph learning algorithm based on incremental neural learning and a subgraph-based coding method. Compared with algorithms based on the classic Bag-of-Features (BoF) model, it holds the following advantages: 1) it learns the codebook quickly and effectively using only a small amount of training data; 2) it models the relationships among visual words to guarantee higher discriminative power; 3) it automatically learns a codebook of appropriate size. The above characteristics make our method more suitable for Handling Large-Scale image classification tasks. Experimental results on the Caltech-101 and Caltech-256 datasets demonstrate that the proposed algorithm achieves better performance while considerably reducing the computational cost.
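
The subgraph-based coding step can be sketched as follows: an image's descriptors activate their nearest visual words, and the image is represented by the activation histogram over the induced subgraph's nodes and edges. The exact coding rule used here is an assumption for illustration, not the paper's method.

```python
import numpy as np

def subgraph_code(descriptors, words, edges):
    """Code one image against a learned codebook graph (a sketch, not the
    paper's exact sub-graph extraction rule).

    descriptors : (m, d) array of local features from the image
    words       : (k, d) array of codebook vectors (graph nodes)
    edges       : iterable of (i, j) index pairs linking related words
    """
    # nearest visual word for every descriptor
    d2 = ((descriptors[:, None, :] - words[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)

    node_hist = np.bincount(assign, minlength=len(words)).astype(float)
    # an edge of the codebook graph fires when both of its endpoints are activated
    edge_hist = np.array([min(node_hist[i], node_hist[j]) for i, j in edges])
    code = np.concatenate([node_hist, edge_hist])
    return code / (np.linalg.norm(code) + 1e-12)        # L2-normalised image code

# Toy usage with a random 16-word codebook over 8-dimensional descriptors
rng = np.random.default_rng(0)
words = rng.normal(size=(16, 8))
edges = [(0, 1), (1, 2), (3, 7), (5, 9)]
image = rng.normal(size=(40, 8))
print(subgraph_code(image, words, edges).shape)          # (16 + 4,) feature vector
```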