Iterative Improvement

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 360 Experts worldwide ranked by the ideXlab platform

Wenyong Deng - One of the best experts on this subject based on the ideXlab platform.

  • cluster aware Iterative Improvement techniques for partitioning large vlsi circuits
    ACM Transactions on Design Automation of Electronic Systems, 2002
    Co-Authors: Shantanu Dutt, Wenyong Deng
    Abstract:

    Move-based Iterative Improvement partitioning (IIP) methods, such as the Fiduccia-Mattheyses (FM) algorithm [Fiduccia and Mattheyses 1982] and Krishnamurthy's Look-Ahead (LA) algorithm [Krishnamurthy 1984], are widely used in VLSI CAD applications, largely due to their time efficiency and ease of implementation. This class of algorithms is of the "local/greedy Improvement" type, and they generate relatively high-quality results for small and medium-size circuits. However, as VLSI circuits become larger, these algorithms suffer a rapid deterioration in solution quality. We propose new IIP methods, CLIP and CDIP, that select cells to move with a view to moving clusters that straddle the two subsets of a partition into one of the subsets. The new algorithms significantly improve partition quality while preserving the advantage of time efficiency. Experimental results on 25 medium- to large-size ACM/SIGDA benchmark circuits show up to 70% Improvement over FM in mincut, and average mincut Improvements of about 35% over all circuits and 47% over large circuits. They also outperform state-of-the-art non-IIP techniques, the quadratic-programming-based method Paraboli [Riess et al. 1994] and the spectral partitioner MELO [Alpert and Yao 1995], by about 17% and 23%, respectively, with less CPU time. This demonstrates the potential of sophisticated IIP algorithms in dealing with the increasing complexity of emerging VLSI circuits. We also compare CLIP and CDIP to hMetis [Karypis et al. 1997], one of the best recent state-of-the-art partitioners based on the multilevel paradigm (others include MLc [Alpert et al. 1997] and LSR/MFFS [Cong et al. 1997]). The results show that one scheme of hMetis is 8% worse than CLIP/CDIP and the other two schemes are only 2--4% better. However, CLIP/CDIP have advantages over hMetis and other multilevel partitioners that outweigh these minimal mincut Improvements. The first is much faster time-to-solution (for example, one of our best schemes, CLIP-LA2, is 6.4 and 11.75 times faster than the two best hMetis schemes) and much better scalability with circuit size (e.g., for the largest circuit, with about 162K nodes, CLIP-LA2 is 10.4 and 21.5 times faster and obtains better solution quality than the two best hMetis schemes). Second, CLIP/CDIP are "flat" partitioners, while multilevel techniques perform a sequence of node clustering/coarsening steps before partitioning the circuit. In complex placement applications, such as timing-driven placement in the presence of multiple constraints, such circuit coarsening can hide crucial information needed for good-quality solutions, making the partitioning process oblivious to it. This is not a problem with flat partitioners like CLIP/CDIP, which can take all important parameters into account while partitioning. All these advantages make CLIP/CDIP suitable for use in complex physical design problems for large, deep-submicron VLSI circuits.
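
The move-based pass that these IIP methods build on can be sketched in a few lines. The sketch below is a minimal illustration of the FM-style loop (no gain buckets, only a crude balance window), not the paper's CLIP/CDIP cluster-oriented cell selection; all names are this sketch's own:

```python
def fm_pass(nets, side):
    """One pass of move-based iterative improvement (FM-style sketch).
    nets: list of lists of cell names; side: dict mapping cell -> 0 or 1."""
    def cut(s):
        return sum(1 for net in nets if len({s[c] for c in net}) == 2)

    def gain(c):                       # cut reduction if c switched sides now
        g = 0
        for net in nets:
            if c in net:
                same = sum(1 for x in net if side[x] == side[c])
                if same == len(net):
                    g -= 1             # net would become cut
                elif same == 1:
                    g += 1             # net would become uncut
        return g

    side = dict(side)
    n = len(side)
    count0 = sum(1 for c in side if side[c] == 0)
    free = sorted(side)                # each cell moves at most once per pass
    best_cut, best_side = cut(side), dict(side)
    while free:
        # crude balance window: never empty a side (real FM uses an area tolerance)
        ok = [c for c in free if 1 <= count0 + (1 if side[c] else -1) <= n - 1]
        if not ok:
            break
        c = max(ok, key=gain)          # highest-gain move, even if negative
        side[c] ^= 1
        count0 += 1 if side[c] == 0 else -1
        free.remove(c)
        if cut(side) < best_cut:       # keep the best prefix of the move sequence
            best_cut, best_side = cut(side), dict(side)
    return best_side, best_cut
```

Accepting negative-gain moves and then rolling back to the best prefix is what lets a pass climb out of shallow local minima, and it is the hook that cluster-aware selection rules like CLIP/CDIP refine.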

  • vlsi circuit partitioning by cluster removal using Iterative Improvement techniques
    International Conference on Computer Aided Design, 1996
    Co-Authors: Shantanu Dutt, Wenyong Deng
    Abstract:

    Move-based Iterative Improvement partitioning methods such as the Fiduccia-Mattheyses (FM) algorithm and Krishnamurthy's Look-Ahead (LA) algorithm are widely used in VLSI CAD applications, largely due to their time efficiency and ease of implementation. This class of algorithms is of the "local Improvement" type. They generate relatively high-quality results for small and medium-size circuits. However, as VLSI circuits become larger, these algorithms become much less effective as direct partitioning tools. We propose new Iterative-Improvement methods that select cells to move with a view to moving clusters that straddle the two subsets of a partition into one of the subsets. The new algorithms significantly improve partition quality while preserving the advantage of time efficiency. Experimental results on 25 medium- to large-size ACM/SIGDA benchmark circuits show up to 70% Improvement over FM in cutsize, with average per-circuit Improvements of about 25% and a total cut Improvement of about 35%. They also outperform the recent placement-based partitioning tool Paraboli and the spectral partitioner MELO by about 17% and 23%, respectively, with less CPU time. This demonstrates the potential of Iterative Improvement algorithms in dealing with the increasing complexity of modern VLSI circuitry.

  • a probability based approach to vlsi circuit partitioning
    Design Automation Conference, 1996
    Co-Authors: Shantanu Dutt, Wenyong Deng
    Abstract:

    Iterative-Improvement 2-way min-cut partitioning is an important phase in most circuit partitioning tools. Most Iterative Improvement techniques for circuit netlists, like the Fiduccia-Mattheyses (FM) method, compute the gains of nodes using local netlist information that is only concerned with the immediate Improvement in the cutset. This can lead to misleading gain calculations. Krishnamurthy suggested a lookahead (LA) gain calculation method to ameliorate this situation; however, as we show, it leaves considerable room for Improvement. We present here a probabilistic gain computation approach called PROP that is capable of capturing the global and future implications of moving a node at the current time. Experimental results show that for the same number of runs, PROP performs much better than FM (by about 30%) and LA (by about 27%), and is also better than many recent state-of-the-art clustering-based partitioners like EIG1, WINDOW, MELO and PARABOLI by 15% to 57%. We also show that the space and time complexities of PROP are very reasonable. Our empirical timing results reveal that it is appreciably faster than the above clustering-based techniques, and only a little slower than FM and LA, both of which are very fast.
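
The abstract does not give PROP's actual formula, but the probabilistic idea can be illustrated with a toy model: score a move by the expected cut that remains when every other node may still move later with some probability. Everything below, including the independence assumption and the names, is this sketch's invention, not the paper's method:

```python
def expected_cut(nets, side, p):
    """Expected number of cut nets if each cell c independently ends up on the
    opposite side with probability p[c] (toy independence assumption)."""
    exp_cut = 0.0
    for net in nets:
        all0 = all1 = 1.0              # P(all ends on side 0) / P(all on side 1)
        for c in net:
            p_on_0 = 1 - p[c] if side[c] == 0 else p[c]
            all0 *= p_on_0
            all1 *= 1 - p_on_0
        exp_cut += 1 - all0 - all1     # a net is cut iff its ends are split
    return exp_cut

def prob_gain(nets, side, p, v):
    """Expected cut reduction from committing the move of node v right now."""
    moved = dict(side)
    moved[v] ^= 1
    q = dict(p)
    q[v] = 0.0                         # v's position is no longer uncertain
    return expected_cut(nets, side, p) - expected_cut(nets, moved, q)
```

With all probabilities set to zero, this collapses to the ordinary deterministic gain; with nonzero probabilities it starts to credit moves whose payoff only materializes after neighbors also move, which is the kind of "future implication" the abstract says PROP captures.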

Jan Karel Lenstra - One of the best experts on this subject based on the ideXlab platform.

  • a computational study of local search algorithms for job shop scheduling
    Informs Journal on Computing, 1994
    Co-Authors: E H L Aarts, Jan Karel Lenstra, P J M Van Laarhoven, N L J Ulder
    Abstract:

    We present a computational performance analysis of local search algorithms for job shop scheduling. The algorithms under investigation are Iterative Improvement, simulated annealing, threshold accepting, and genetic local search. Our study shows that simulated annealing performs best in the sense that it finds better solutions than the other algorithms within the same amount of running time. Compared to more tailored algorithms, simulated annealing still finds the best results, but only under the assumption that running time is of no concern. Compared to tabu search, however, simulated annealing is outperformed, especially with respect to running times. (INFORMS Journal on Computing, ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.)

  • job shop scheduling by local search
    Memorandum COSOR, 1994
    Co-Authors: Rob J M Vaessens, Ehl Emile Aarts, Jan Karel Lenstra
    Abstract:

    We present a computational performance analysis of local search algorithms for job shop scheduling. The algorithms under investigation are Iterative Improvement, simulated annealing, threshold accepting and genetic local search. Our study shows that simulated annealing performs best in the sense that it finds better solutions than the other algorithms within the same amount of running time. Compared to more tailored algorithms, simulated annealing still finds the best results but only under the assumption that running time is of no concern.

  • job shop scheduling by simulated annealing
    Operations Research, 1992
    Co-Authors: Peter J M Van Laarhoven, E H L Aarts, Jan Karel Lenstra
    Abstract:

    We describe an approximation algorithm for the problem of finding the minimum makespan in a job shop. The algorithm is based on simulated annealing, a generalization of the well known Iterative Improvement approach to combinatorial optimization problems. The generalization involves the acceptance of cost-increasing transitions with a nonzero probability to avoid getting stuck in local minima. We prove that our algorithm asymptotically converges in probability to a globally minimal solution, despite the fact that the Markov chains generated by the algorithm are generally not irreducible. Computational experiments show that our algorithm can find shorter makespans than two recent approximation approaches that are more tailored to the job shop scheduling problem. This is, however, at the cost of large running times.
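
The generalization the abstract describes, accepting cost-increasing transitions with nonzero probability, can be sketched generically. The toy problem below is a single-machine sequencing stand-in chosen for compactness, not the paper's job shop model, and all parameter values are this sketch's assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, alpha=0.95, steps=2000, seed=0):
    """Iterative Improvement generalized as the abstract describes: a worse
    neighbor is accepted with probability exp(-delta/t) to escape local minima."""
    rng = random.Random(seed)
    x = best = x0
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= alpha                     # geometric cooling schedule
    return best

# toy stand-in: sequence four jobs on one machine to minimize total weighted
# completion time, with a random-swap neighborhood
jobs = [(3, 1), (1, 4), (2, 2), (4, 3)]   # (processing time, weight)

def twct(perm):
    t = acc = 0
    for j in perm:
        t += jobs[j][0]
        acc += jobs[j][1] * t
    return acc

def swap_two(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    y = list(perm)
    y[i], y[j] = y[j], y[i]
    return tuple(y)

best = simulated_annealing(twct, swap_two, (0, 1, 2, 3))
```

Setting `t0 = 0` (so no uphill move is ever accepted) recovers plain Iterative Improvement, which makes the relationship between the two algorithms in these abstracts concrete.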

Douglas L Mcwilliams - One of the best experts on this subject based on the ideXlab platform.

  • Iterative Improvement to solve the parcel hub scheduling problem
    Computers & Industrial Engineering, 2010
    Co-Authors: Douglas L Mcwilliams
    Abstract:

    This paper presents Iterative Improvement algorithms to solve the parcel hub scheduling problem (PHSP). The PHSP is a combinatorial optimization problem that consists of scheduling a set of inbound trailers to a small number of unload docks. At the unload docks, the inbound trailers must be unloaded and the parcels sorted and loaded onto outbound trailers. Because the transfer operation is labor intensive, the transfer of parcels must be done in such a way as to minimize the timespan of the transfer operation. Local search (LS) and simulated annealing (SA) algorithms are developed and evaluated to solve the problem. The performance of the algorithms is compared to that of an existing genetic algorithm (GA). The computational results show that the LS and SA algorithms offer solutions that are superior to those offered by the GA.
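
As a hedged sketch of what Iterative Improvement means for a dock-assignment core like this, the routine below relocates one trailer at a time between unload docks whenever that lowers a makespan proxy (the largest total unload time on any dock). The real PHSP objective also involves sortation and outbound loading, which this toy model omits, and all names here are illustrative:

```python
def improve_assignment(unload_times, docks, assign):
    """Iterative Improvement on a toy dock-assignment model: repeatedly move a
    single trailer to another dock while that lowers the makespan proxy
    (max total unload time on one dock); stops at a local optimum."""
    loads = [0.0] * docks
    for trailer, d in enumerate(assign):
        loads[d] += unload_times[trailer]
    improved = True
    while improved:                     # sweep until no single relocation helps
        improved = False
        for trailer in range(len(assign)):
            d = assign[trailer]
            for e in range(docks):
                if e == d:
                    continue
                trial = list(loads)     # would moving this trailer to dock e help?
                trial[d] -= unload_times[trailer]
                trial[e] += unload_times[trailer]
                if max(trial) < max(loads):
                    loads, d, improved = trial, e, True
                    assign[trailer] = e
    return assign, max(loads)
```

A GA or SA layered on top of the same move would differ only in sometimes accepting relocations that worsen the proxy, which is exactly the comparison the paper runs at full PHSP fidelity.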

  • simulation based scheduling for parcel consolidation terminals a comparison of Iterative Improvement and simulated annealing
    Winter Simulation Conference, 2005
    Co-Authors: Douglas L Mcwilliams
    Abstract:

    This research explores the application of a simulation-based scheduling algorithm to generate unload schedules for processing feeder trailers in a parcel consolidation terminal. The study compares the performance of Iterative Improvement and simulated annealing to produce quality schedules. The paper reports the results from a number of experimental test problems.

Cevdet Aykanat - One of the best experts on this subject based on the ideXlab platform.

  • Iterative-Improvement-based heuristics for adaptive scheduling of tasks sharing files on heterogeneous master-slave environments
    IEEE Transactions on Parallel and Distributed Systems, 2006
    Co-Authors: Kamer Kaya, Cevdet Aykanat
    Abstract:

    The scheduling of independent but file-sharing tasks on heterogeneous master-slave platforms has recently found important applications in grid environments. The scheduling heuristics recently proposed for this problem are all constructive in nature and based on a common greedy criterion which depends on the momentary completion time values of the tasks. We show that this greedy decision criterion has shortcomings in exploiting the file-sharing interaction among tasks, since completion time values are inadequate to extract the global view of this interaction. We propose a three-phase scheduling approach which involves initial task assignment, refinement, and execution ordering phases. For the refinement phase, we model the target application as a hypergraph and, with an elegant hypergraph-partitioning-like formulation, we propose using Iterative-Improvement-based heuristics for refining the task assignments according to two novel objective functions. Unlike the turnaround time, which is the actual schedule cost, the proposed objective functions are smooth, which enables the successful use of Iterative-Improvement-based heuristics, since their effectiveness and efficiency depend on the smoothness of the objective function. Experimental results on a wide range of synthetically generated heterogeneous master-slave frameworks show that the proposed three-phase scheduling approach performs much better than the greedy constructive approach.

  • Iterative Improvement based declustering heuristics for multi disk databases
    Information Systems, 2005
    Co-Authors: Mehmet Koyuturk, Cevdet Aykanat
    Abstract:

    Data declustering is an important issue for reducing query response times in multi-disk database systems. In this paper, we propose a declustering method that utilizes the available information on query distribution, data distribution, data-item sizes, and disk capacity constraints. The proposed method exploits the natural correspondence between a data set with a given query distribution and a hypergraph. We define an objective function that exactly represents the aggregate parallel query-response time for the declustering problem and adapt the Iterative-Improvement-based heuristics successfully used in hypergraph partitioning to this objective function. We propose a two-phase algorithm that first obtains an initial K-way declustering by recursively bipartitioning the data set, then applies multiway refinement on this declustering. We provide effective gain models and efficient implementation schemes for both phases. The experimental results on a wide range of realistic data sets show that the proposed method provides a significant performance Improvement compared with the state-of-the-art declustering strategy based on similarity-graph partitioning.
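
The objective the abstract alludes to can be made concrete under a common declustering cost model (the paper's exact formulation may differ, and the names below are illustrative): with one item retrievable per disk per parallel step, a query's response time is the largest number of its items placed on any single disk:

```python
def aggregate_response_time(queries, placement, freq=None):
    """Aggregate parallel query-response time of a declustering under a common
    cost model: each query's response time is the maximum number of its data
    items that landed on the same disk (disks serve their items in parallel)."""
    total = 0
    for qi, query in enumerate(queries):
        per_disk = {}
        for item in query:
            disk = placement[item]
            per_disk[disk] = per_disk.get(disk, 0) + 1
        total += (freq[qi] if freq is not None else 1) * max(per_disk.values())
    return total
```

Because relocating one item changes this sum only through the few queries touching that item, the score is cheap to update incrementally, which is what makes Iterative-Improvement refinement with per-move gain models practical here.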