Proving Convergence

The Experts below are selected from a list of 303 Experts worldwide, ranked by the ideXlab platform.

Anshul Gupta - One of the best experts on this subject based on the ideXlab platform.

  • Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization
    Journal of the ACM, 2015
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's [1969] pioneering paper on chaotic relaxation. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, the applicability of asynchronous methods for solving linear equations has been limited to certain restricted classes of matrices, such as diagonally dominant matrices. Furthermore, analysis of these methods focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors have seldom been studied and are still not well understood. In this article, we propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. We also present an algorithm for unsymmetric systems and overdetermined least-squares. Our work presents a significant improvement in the applicability of asynchronous linear solvers as well as in their convergence analysis, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.
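In its sequential form, the method analyzed here is a randomized coordinate-relaxation (Gauss-Seidel-style) update for a symmetric positive definite system A x = b. A minimal sequential sketch, with an illustrative matrix and parameter choices (the asynchronous variant in the paper runs such updates concurrently on possibly stale shared-memory values):

```python
import numpy as np

def randomized_gauss_seidel(A, b, num_iters=5000, seed=0):
    """Sequential sketch of randomized coordinate relaxation for SPD A x = b.

    Each step picks a coordinate i at random and sets x[i] so that
    equation i holds exactly given the current values of the other
    coordinates. Since A is SPD, A[i, i] > 0 and the step is well defined.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(num_iters):
        i = rng.integers(len(b))
        x[i] += (b[i] - A[i] @ x) / A[i, i]  # zero out the i-th residual
    return x

# Small illustrative SPD system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = randomized_gauss_seidel(A, b)
print(np.allclose(A @ x, b, atol=1e-6))
```

The paper's contribution is proving that the expected error of such updates still contracts linearly when processors apply them asynchronously, provided the processor count is not excessive relative to the size and sparsity of the matrix.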

  • IPDPS - Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization
    2014 IEEE 28th International Parallel and Distributed Processing Symposium, 2014
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's pioneering 1969 paper. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors were seldom studied, and are still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices, such as diagonally dominant matrices. We propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear, and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. Our work presents a significant improvement in convergence analysis as well as in the applicability of asynchronous linear solvers, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

  • A Randomized Asynchronous Linear Solver with Provable Convergence Rate
    2013
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker published their pioneering paper on chaotic relaxation in 1969. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to work and make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. How the rate of convergence compares to the rate of convergence of the synchronous counterparts, and how it scales when the number of processors increases, was seldom studied and is still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices (e.g., diagonally dominant matrices). We propose a shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and close to that of our method's synchronous counterpart as long as not too many processors are used (relative to the size and sparsity of the matrix). A key component is randomization, which allows the processors to make guaranteed progress without introducing synchronization. Our analysis shows a convergence rate that is linear in the condition number of the matrix, and depends on the number of processors and the degree to which the matrix is sparse.

  • Revisiting Asynchronous Linear Solvers: Provable Convergence Rate Through Randomization
    arXiv: Distributed Parallel and Cluster Computing, 2013
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's pioneering 1969 paper on chaotic relaxation. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, the applicability of asynchronous methods for solving linear equations was limited to certain restricted classes of matrices, such as diagonally dominant matrices. Furthermore, analysis of these methods focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors were seldom studied, and are still not well understood. In this paper, we propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear, and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. We also present an algorithm for unsymmetric systems and overdetermined least-squares. Our work presents a significant improvement in the applicability of asynchronous linear solvers as well as in their convergence analysis, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

Haim Avron - One of the best experts on this subject based on the ideXlab platform.

  • Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization
    Journal of the ACM, 2015
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's [1969] pioneering paper on chaotic relaxation. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, the applicability of asynchronous methods for solving linear equations has been limited to certain restricted classes of matrices, such as diagonally dominant matrices. Furthermore, analysis of these methods focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors have seldom been studied and are still not well understood. In this article, we propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. We also present an algorithm for unsymmetric systems and overdetermined least-squares. Our work presents a significant improvement in the applicability of asynchronous linear solvers as well as in their convergence analysis, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

  • IPDPS - Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization
    2014 IEEE 28th International Parallel and Distributed Processing Symposium, 2014
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's pioneering 1969 paper. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors were seldom studied, and are still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices, such as diagonally dominant matrices. We propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear, and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. Our work presents a significant improvement in convergence analysis as well as in the applicability of asynchronous linear solvers, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

  • A Randomized Asynchronous Linear Solver with Provable Convergence Rate
    2013
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker published their pioneering paper on chaotic relaxation in 1969. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to work and make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. How the rate of convergence compares to the rate of convergence of the synchronous counterparts, and how it scales when the number of processors increases, was seldom studied and is still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices (e.g., diagonally dominant matrices). We propose a shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and close to that of our method's synchronous counterpart as long as not too many processors are used (relative to the size and sparsity of the matrix). A key component is randomization, which allows the processors to make guaranteed progress without introducing synchronization. Our analysis shows a convergence rate that is linear in the condition number of the matrix, and depends on the number of processors and the degree to which the matrix is sparse.

  • Revisiting Asynchronous Linear Solvers: Provable Convergence Rate Through Randomization
    arXiv: Distributed Parallel and Cluster Computing, 2013
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's pioneering 1969 paper on chaotic relaxation. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, the applicability of asynchronous methods for solving linear equations was limited to certain restricted classes of matrices, such as diagonally dominant matrices. Furthermore, analysis of these methods focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors were seldom studied, and are still not well understood. In this paper, we propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear, and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. We also present an algorithm for unsymmetric systems and overdetermined least-squares. Our work presents a significant improvement in the applicability of asynchronous linear solvers as well as in their convergence analysis, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

Alex Druinsky - One of the best experts on this subject based on the ideXlab platform.

  • Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization
    Journal of the ACM, 2015
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's [1969] pioneering paper on chaotic relaxation. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, the applicability of asynchronous methods for solving linear equations has been limited to certain restricted classes of matrices, such as diagonally dominant matrices. Furthermore, analysis of these methods focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors have seldom been studied and are still not well understood. In this article, we propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. We also present an algorithm for unsymmetric systems and overdetermined least-squares. Our work presents a significant improvement in the applicability of asynchronous linear solvers as well as in their convergence analysis, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

  • IPDPS - Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization
    2014 IEEE 28th International Parallel and Distributed Processing Symposium, 2014
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's pioneering 1969 paper. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors were seldom studied, and are still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices, such as diagonally dominant matrices. We propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear, and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. Our work presents a significant improvement in convergence analysis as well as in the applicability of asynchronous linear solvers, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

  • A Randomized Asynchronous Linear Solver with Provable Convergence Rate
    2013
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker published their pioneering paper on chaotic relaxation in 1969. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to work and make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. How the rate of convergence compares to the rate of convergence of the synchronous counterparts, and how it scales when the number of processors increases, was seldom studied and is still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices (e.g., diagonally dominant matrices). We propose a shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and close to that of our method's synchronous counterpart as long as not too many processors are used (relative to the size and sparsity of the matrix). A key component is randomization, which allows the processors to make guaranteed progress without introducing synchronization. Our analysis shows a convergence rate that is linear in the condition number of the matrix, and depends on the number of processors and the degree to which the matrix is sparse.

  • Revisiting Asynchronous Linear Solvers: Provable Convergence Rate Through Randomization
    arXiv: Distributed Parallel and Cluster Computing, 2013
    Co-Authors: Haim Avron, Alex Druinsky, Anshul Gupta
    Abstract:

    Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker's pioneering 1969 paper on chaotic relaxation. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to make progress even if not all progress made by other processors has been communicated to them. Historically, the applicability of asynchronous methods for solving linear equations was limited to certain restricted classes of matrices, such as diagonally dominant matrices. Furthermore, analysis of these methods focused on proving convergence in the limit. Comparison of the asynchronous convergence rate with its synchronous counterpart and its scaling with the number of processors were seldom studied, and are still not well understood. In this paper, we propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear, and is close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix. We also present an algorithm for unsymmetric systems and overdetermined least-squares. Our work presents a significant improvement in the applicability of asynchronous linear solvers as well as in their convergence analysis, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.

Nicolas Champagnat - One of the best experts on this subject based on the ideXlab platform.

  • From stochastic, individual-based models to the canonical equation of adaptive dynamics - In one step
    Annals of Applied Probability, 2017
    Co-Authors: Martina Baar, Anton Bovier, Nicolas Champagnat
    Abstract:

    We consider a model for Darwinian evolution in an asexual population with a large but non-constant population size, characterized by a natural birth rate, a logistic death rate modelling competition, and a probability of mutation at each birth event. In the present paper, we study the long-term behavior of the system in the limit of large population size ($K\rightarrow\infty$), rare mutations ($u\rightarrow 0$), and small mutational effects ($\sigma\rightarrow 0$), proving convergence to the canonical equation of adaptive dynamics (CEAD). In contrast to earlier works, e.g. by Champagnat and Méléard, we take the three limits simultaneously, i.e., $u=u_K$ and $\sigma=\sigma_K$ tend to zero with $K$, subject to conditions that ensure that the time-scale of birth and death events remains separated from that of successful mutational events. This slows down the dynamics of the microscopic system and leads to serious technical difficulties that require the use of completely different methods. In particular, we cannot use the law of large numbers on the diverging time needed for fixation to approximate the stochastic system with the corresponding deterministic one. To solve this problem we develop a "stochastic Euler scheme" based on coupling arguments that allows us to control the time evolution of the stochastic system over time-scales that diverge with $K$.
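The scaling regimes are easier to picture with a toy Gillespie-style simulation of the underlying individual-based process: per-capita births at rate 1, logistic deaths at rate n/K, and a Gaussian mutation of size sigma with probability u at each birth. All rate functions and parameter values below are illustrative stand-ins, not the paper's actual assumptions:

```python
import random

def simulate(K=200, u=0.01, sigma=0.05, t_max=20.0, seed=1):
    """Toy birth-death-mutation process with logistic competition.

    Illustrative only: per-capita birth rate 1, per-capita death rate
    n/K, Gaussian mutation kernel. The paper's regime additionally
    sends u and sigma to zero as K grows.
    """
    rng = random.Random(seed)
    pop = [0.0] * K  # trait values; start monomorphic at trait 0
    t = 0.0
    while t < t_max and pop:
        n = len(pop)
        birth_rate = n * 1.0       # total birth rate
        death_rate = n * (n / K)   # total logistic death rate
        total = birth_rate + death_rate
        t += rng.expovariate(total)       # time to the next event
        x = rng.choice(pop)               # individual hit by the event
        if rng.random() < birth_rate / total:
            # birth: copy the parent's trait, mutating with probability u
            child = x + sigma * rng.gauss(0.0, 1.0) if rng.random() < u else x
            pop.append(child)
        else:
            pop.remove(x)                 # death
    return pop

pop = simulate()
print(len(pop))  # population size fluctuates around K
```

In the regime studied in the paper, $u_K$ and $\sigma_K$ shrink with $K$ so that successful mutations stay rare on the birth-death time-scale; the toy fixes them only to show the mechanics.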

Felix C. Gärtner - One of the best experts on this subject based on the ideXlab platform.

  • An Exercise in Proving Convergence through Transfer Functions
    International Conference on Distributed Computing Systems, 1999
    Co-Authors: Oliver Theel, Felix C. Gärtner
    Abstract:

    Self-stabilizing algorithms must fulfill two requirements, generally called closure and convergence. We are interested in the convergence property and discuss a new method for proving it. Usually, proving the convergence of self-stabilizing algorithms requires a well-foundedness argument: briefly put, it involves exhibiting a convergence function which is shown to decrease with every transition of the algorithm, starting in an illegal state. Devising such a convergence function can be a difficult task, since it must bear in itself the essence of stabilization which lies within the algorithm. We explore how to utilize results from control theory to prove the stability of self-stabilizing algorithms. We define a simple stabilization task and adapt stability criteria for linear control circuits to construct a self-stabilizing algorithm which solves the task. In contrast to the usual procedure, in which finding a convergence function is an afterthought of algorithm design, our approach can be seen as starting with a convergence function which is implicitly given through a so-called transfer function. Then, we construct an algorithm around it. It turns out that this methodology seems to adapt well to settings which are quite difficult to handle by the traditional methodologies of self-stabilization.
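The flavor of this approach can be sketched with a toy example (illustrative only, not the authors' construction): model each node's repair step as a linear update and read convergence off the stability of the resulting linear system, rather than hand-crafting a convergence function:

```python
def stabilize(values, steps=100):
    """Ring of nodes, each repeatedly averaging with its two neighbors.

    The update is linear; its non-unit eigenvalues lie strictly inside
    the unit circle, so the spread max - min acts as an implicit
    convergence function that decays geometrically from any illegal
    initial state toward the legal states, in which all nodes agree.
    """
    xs = list(values)
    n = len(xs)
    for _ in range(steps):
        xs = [(xs[i - 1] + xs[i] + xs[(i + 1) % n]) / 3.0 for i in range(n)]
    return xs

# An arbitrary (illegal) initial state converges to agreement.
out = stabilize([10.0, -3.0, 7.0, 0.0])
print(max(out) - min(out))  # spread shrinks toward zero
```

Here the stability of the linear loop, the analogue of a stable transfer function, does the work that an explicit well-foundedness argument would otherwise do.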

  • WSS - An exercise in Proving Convergence through transfer functions
    Proceedings 19th IEEE International Conference on Distributed Computing Systems, 1999
    Co-Authors: Oliver Theel, Felix C. Gärtner
    Abstract:

    Self-stabilizing algorithms must fulfill two requirements, generally called closure and convergence. We are interested in the convergence property and discuss a new method for proving it. Usually, proving the convergence of self-stabilizing algorithms requires a well-foundedness argument: briefly put, it involves exhibiting a convergence function which is shown to decrease with every transition of the algorithm, starting in an illegal state. Devising such a convergence function can be a difficult task, since it must bear in itself the essence of stabilization which lies within the algorithm. We explore how to utilize results from control theory to prove the stability of self-stabilizing algorithms. We define a simple stabilization task and adapt stability criteria for linear control circuits to construct a self-stabilizing algorithm which solves the task. In contrast to the usual procedure, in which finding a convergence function is an afterthought of algorithm design, our approach can be seen as starting with a convergence function which is implicitly given through a so-called transfer function. Then, we construct an algorithm around it. It turns out that this methodology seems to adapt well to settings which are quite difficult to handle by the traditional methodologies of self-stabilization.