The Experts below are selected from a list of 327 Experts worldwide ranked by the ideXlab platform
Mouez Dimassi - One of the best experts on this subject based on the ideXlab platform.
-
Spectral shift function and resonances for slowly varying perturbations of periodic Schrödinger operators
Journal of Functional Analysis, 2005. Co-Authors: Mouez Dimassi. Abstract: We study the spectral shift function s(λ, h) and the resonances of the operator P(h) = −Δ + V(x) + W(hx). Here V is a periodic potential, W a decreasing perturbation, and h a small positive constant. We give a representation of the derivative of s(λ, h) related to the resonances of P(h), and we obtain a Weyl-type asymptotics of s(λ, h). We establish an upper bound O(h^{−n+1}) on the number of resonances of P(h) lying in a disk of radius h.
-
Resonances for Slowly Varying Perturbations of a Periodic Schrödinger Operator
Canadian Journal of Mathematics, 2002. Co-Authors: Mouez Dimassi. Abstract: We study the resonances of the operator P(h) = −Δ + V(x) + φ(hx). Here V is a periodic potential, φ a decreasing perturbation, and h a small positive constant. We prove the existence of shape resonances near the edges of the spectral bands of P0 = −Δ + V(x), and we give their asymptotic expansions in powers of h^{1/2}.
Pankaj K. Agarwal - One of the best experts on this subject based on the ideXlab platform.
-
Approximating Shortest Paths on a Nonconvex Polyhedron
SIAM Journal on Computing, 2000. Co-Authors: Kasturi Varadarajan, Pankaj K. Agarwal. Abstract: We present an approximation algorithm that, given the boundary P of a simple, nonconvex polyhedron in R^3 and two points s and t on P, constructs a path on P between s and t whose length is at most 7(1 + ε) d_P(s, t), where d_P(s, t) is the length of the shortest path between s and t on P, and ε > 0 is an arbitrarily small positive constant. The algorithm runs in O(n^{5/3} log^{5/3} n) time, where n is the number of vertices of P. We also present a slightly faster algorithm that runs in O(n^{8/5} log^{8/5} n) time and returns a path whose length is at most 15(1 + ε) d_P(s, t).
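The algorithm of Varadarajan and Agarwal is intricate; as a much simpler (and slower) illustration of the general idea of approximating surface shortest paths, the sketch below discretizes a triangulated surface with Steiner points on its edges and runs Dijkstra on the resulting graph. The mesh format, function name, and uniform Steiner placement are assumptions for illustration, not the paper's method.

```python
import heapq
from itertools import combinations
from math import dist

def approx_shortest_path(triangles, s, t, steiner=2):
    """Approximate shortest surface path on a triangulated boundary.
    triangles: list of 3-tuples of 3D vertex tuples; s, t must be
    mesh vertices. More Steiner points per edge -> better paths."""
    graph = {}

    def add_edge(a, b):
        w = dist(a, b)
        graph.setdefault(a, {})[b] = w
        graph.setdefault(b, {})[a] = w

    for tri in triangles:
        pts = list(tri)
        for a, b in combinations(tri, 2):  # subdivide each triangle edge
            for j in range(1, steiner + 1):
                f = j / (steiner + 1)
                pts.append(tuple(a[k] + f * (b[k] - a[k]) for k in range(3)))
        for a, b in combinations(pts, 2):  # same face: straight-line visibility
            add_edge(a, b)

    # plain Dijkstra over the Steiner graph
    best = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > best.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            if d + w < best.get(v, float("inf")):
                best[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")
```

On a flat two-triangle square this recovers the exact diagonal; on curved meshes the answer is only as good as the Steiner density allows.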
-
Linear approximation of simple objects
Information Processing Letters, 1997. Co-Authors: Kasturi Varadarajan, Pankaj K. Agarwal. Abstract: Let P = {P_1, P_2, …, P_m} be a set of m convex polygons in the plane with a total of n vertices, and for 1 ≤ i ≤ m let w_i ∈ R^+ be a weight associated with P_i. The weighted distance between a line l and a polygon P_i is given by d(l, P_i) = min_{p ∈ P_i, q ∈ l} d(p, q) · w_i, where d(p, q) is the Euclidean distance between p and q. We want to compute a line l that minimizes the maximum weighted distance between l and the polygons of P. We present an O(n α(n) log^3 n)-time algorithm to compute such a line. We also give an O(n^{2+ε})-time algorithm, where ε is an arbitrarily small positive constant, to solve the three-dimensional version of this problem; here P is a set of convex polytopes in R^3, and we want to compute a plane h that minimizes the maximum weighted distance between h and the polytopes.
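The near-linear algorithm above relies on sophisticated machinery; for small instances the same objective can be approximated by brute force. The sketch below grid-searches over lines written as x·cos θ + y·sin θ = c, using the fact that an infinite line is at distance 0 from a convex polygon whose vertices it separates and otherwise at the minimum vertex distance. The grid resolution and function names are assumptions, not the paper's algorithm.

```python
from math import cos, sin, inf, pi

def line_poly_dist(theta, c, poly):
    """Distance from the line x*cos(theta) + y*sin(theta) = c
    to a convex polygon given by its vertex list."""
    ds = [x * cos(theta) + y * sin(theta) - c for x, y in poly]
    if min(ds) <= 0.0 <= max(ds):
        return 0.0  # vertices on both sides: the line crosses the polygon
    return min(abs(d) for d in ds)

def best_line(polys, weights, steps=200):
    """Grid search for a line minimizing the max weighted polygon distance."""
    coords = [v for p in polys for pt in p for v in pt]
    lo, hi = min(coords) - 1.0, max(coords) + 1.0
    best = (inf, (0.0, 0.0))
    for i in range(steps):
        theta = pi * i / steps
        for j in range(steps):
            c = lo + (hi - lo) * j / steps
            cost = max(w * line_poly_dist(theta, c, p)
                       for p, w in zip(polys, weights))
            best = min(best, (cost, (theta, c)))
    return best
```

This is O(steps^2 · n) and only accurate to the grid spacing, which is exactly the gap the parametric-search algorithm closes.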
-
Intersection queries in sets of disks
BIT Numerical Mathematics, 1992. Co-Authors: Marc Van Kreveld, Mark H. Overmars, Pankaj K. Agarwal. Abstract: In this paper we develop some new data structures for storing a set of disks that can answer different types of intersection queries efficiently. If the disks are non-intersecting, we obtain a linear-size data structure that can report all k disks intersecting a query line segment in time O(n^{β+ε} + k), where n is the number of disks, β = log_2(1 + √5) − 1 ≈ 0.695, and ε is an arbitrarily small positive constant. If the segment is a full line, the query time becomes O(n^β + k). For intersecting disks we obtain an O(n log n)-size data structure that can answer an intersection query in time O(n^{2/3} log^2 n + k). We also present a linear-size data structure for ray shooting queries, whose query time is O(n^β).
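The data structures above achieve sublinear query time. For contrast, the sketch below is the O(n) brute-force baseline they improve upon, built on the standard segment-disk intersection primitive (closest point on the segment compared against the radius); the names are illustrative only.

```python
from math import hypot

def seg_disk_intersect(p, q, c, r):
    """Does segment pq intersect the closed disk of center c, radius r?"""
    (px, py), (qx, qy), (cx, cy) = p, q, c
    dx, dy = qx - px, qy - py
    L2 = dx * dx + dy * dy
    # parameter of the point on pq closest to c, clamped to the segment
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / L2))
    nx, ny = px + t * dx, py + t * dy
    return hypot(cx - nx, cy - ny) <= r

def query(disks, p, q):
    """O(n) baseline: report all disks (center, radius) intersecting segment pq."""
    return [d for d in disks if seg_disk_intersect(p, q, *d)]
```

A query here always costs O(n), whereas the paper's structure for disjoint disks answers the same question in O(n^{β+ε} + k).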
-
FOCS - Approximating shortest paths on a nonconvex polyhedron
Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS), 1997. Co-Authors: Kasturi Varadarajan, Pankaj K. Agarwal. Abstract: We present an approximation algorithm that, given the boundary P of a simple, nonconvex polyhedron in R^3 and two points s and t on P, constructs a path on P between s and t whose length is at most 7(1 + ε) d_P(s, t), where d_P(s, t) is the length of the shortest path between s and t on P, and ε > 0 is an arbitrarily small positive constant. The algorithm runs in O(n^{5/3} log^{5/3} n) time, where n is the number of vertices of P. We also present a slightly faster algorithm that runs in O(n^{8/5} log^{8/5} n) time and returns a path whose length is at most 15(1 + ε) d_P(s, t).
Xu Jin - One of the best experts on this subject based on the ideXlab platform.
-
Fault-tolerant iterative learning control for mobile robots: non-repetitive trajectory tracking with output constraints
Automatica, 2018. Co-Authors: Xu Jin. Abstract: In this brief, we develop a novel iterative learning control (ILC) algorithm to deal with trajectory tracking problems for a class of unicycle-type mobile robots with two actuated wheels that are subject to actuator faults. Unlike most of the ILC literature, which requires identical reference trajectories over the iteration domain, the desired trajectories in this work can be iteration dependent, and the initial position of the robot in each iteration can also be random. The mass and inertia properties of the robot and wheels can be unknown and iteration dependent. Barrier Lyapunov functions are used in the analysis to guarantee satisfaction of the constraint requirements, feasibility of the controller, and prescribed tracking performance. We show that under the proposed algorithm, the distance and angle tracking errors uniformly converge to an arbitrarily small positive constant and to zero, respectively, over the iteration domain, beyond a small initial time interval in each iteration. A numerical simulation is presented at the end to demonstrate the efficacy of the proposed algorithm.
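The paper's controller is considerably more sophisticated (barrier Lyapunov functions, actuator faults, iteration-varying references); the sketch below only illustrates the core ILC idea on a hypothetical scalar plant: replay the trial, then correct the input using the previous trial's tracking error, so the error shrinks over the iteration domain. Plant, gains, and names are assumptions.

```python
def run_trial(u, x0=0.0, dt=0.1):
    """Simulate a toy first-order plant x' = -x + u over one trial."""
    x, y = x0, []
    for ui in u:
        x = x + dt * (-x + ui)
        y.append(x)
    return y

def ilc(ref, trials=30, gamma=0.8, dt=0.1):
    """Classic P-type ILC: u_{k+1}(t) = u_k(t) + gamma * e_k(t),
    where e_k is the tracking error recorded on trial k.
    Returns the max tracking error after the final trial."""
    u = [0.0] * len(ref)
    for _ in range(trials):
        y = run_trial(u, dt=dt)
        u = [ui + gamma * (r - yi) for ui, r, yi in zip(u, ref, y)]
    y = run_trial(u, dt=dt)
    return max(abs(r - yi) for r, yi in zip(ref, y))
```

Note the learning happens across trials, not within one: each iteration reuses the entire error signal of the previous run.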
Kasturi Varadarajan - One of the best experts on this subject based on the ideXlab platform.
-
Approximating Shortest Paths on a Nonconvex Polyhedron
SIAM Journal on Computing, 2000. Co-Authors: Kasturi Varadarajan, Pankaj K. Agarwal. Abstract: We present an approximation algorithm that, given the boundary P of a simple, nonconvex polyhedron in R^3 and two points s and t on P, constructs a path on P between s and t whose length is at most 7(1 + ε) d_P(s, t), where d_P(s, t) is the length of the shortest path between s and t on P, and ε > 0 is an arbitrarily small positive constant. The algorithm runs in O(n^{5/3} log^{5/3} n) time, where n is the number of vertices of P. We also present a slightly faster algorithm that runs in O(n^{8/5} log^{8/5} n) time and returns a path whose length is at most 15(1 + ε) d_P(s, t).
-
Linear approximation of simple objects
Information Processing Letters, 1997. Co-Authors: Kasturi Varadarajan, Pankaj K. Agarwal. Abstract: Let P = {P_1, P_2, …, P_m} be a set of m convex polygons in the plane with a total of n vertices, and for 1 ≤ i ≤ m let w_i ∈ R^+ be a weight associated with P_i. The weighted distance between a line l and a polygon P_i is given by d(l, P_i) = min_{p ∈ P_i, q ∈ l} d(p, q) · w_i, where d(p, q) is the Euclidean distance between p and q. We want to compute a line l that minimizes the maximum weighted distance between l and the polygons of P. We present an O(n α(n) log^3 n)-time algorithm to compute such a line. We also give an O(n^{2+ε})-time algorithm, where ε is an arbitrarily small positive constant, to solve the three-dimensional version of this problem; here P is a set of convex polytopes in R^3, and we want to compute a plane h that minimizes the maximum weighted distance between h and the polytopes.
-
FOCS - Approximating shortest paths on a nonconvex polyhedron
Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS), 1997. Co-Authors: Kasturi Varadarajan, Pankaj K. Agarwal. Abstract: We present an approximation algorithm that, given the boundary P of a simple, nonconvex polyhedron in R^3 and two points s and t on P, constructs a path on P between s and t whose length is at most 7(1 + ε) d_P(s, t), where d_P(s, t) is the length of the shortest path between s and t on P, and ε > 0 is an arbitrarily small positive constant. The algorithm runs in O(n^{5/3} log^{5/3} n) time, where n is the number of vertices of P. We also present a slightly faster algorithm that runs in O(n^{8/5} log^{8/5} n) time and returns a path whose length is at most 15(1 + ε) d_P(s, t).
Alexander J. Zaslavski - One of the best experts on this subject based on the ideXlab platform.
-
A Zero-Sum Game with Two Players
Convex Optimization with Computational Errors, 2020. Co-Authors: Alexander J. Zaslavski. Abstract: In this chapter we study an algorithm for finding a saddle point of a two-player zero-sum game. Each iteration of the algorithm consists of two steps: the first is a subgradient calculation, and the second is a proximal gradient step. Each of these two steps carries a computational error, and in general the two errors are different. We show that the algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of the algorithm, we can determine what approximate solution can be obtained and how many iterations this requires.
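The book's algorithm and error analysis are specific to its setting; as a generic illustration of the phenomenon (bounded computational errors still yield an approximate saddle point), the sketch below runs gradient descent-ascent on the toy game f(x, y) = x² + xy − y², perturbing every gradient evaluation by a bounded error. The game, step size, and error model are assumptions.

```python
import random

def gda_saddle(steps=2000, eta=0.05, err=1e-3, seed=0):
    """Gradient descent-ascent on the toy zero-sum game
    f(x, y) = x**2 + x*y - y**2 (convex in x, concave in y;
    saddle point at the origin), with every gradient evaluation
    perturbed by an error bounded by `err`."""
    rng = random.Random(seed)
    x = y = 1.0
    for _ in range(steps):
        gx = 2 * x + y + rng.uniform(-err, err)  # inexact df/dx
        gy = x - 2 * y + rng.uniform(-err, err)  # inexact df/dy
        x -= eta * gx  # minimizing player descends
        y += eta * gy  # maximizing player ascends
    return x, y
```

The iterates do not reach the saddle point exactly; they settle in a neighborhood whose radius scales with the error bound, which is the qualitative content of the chapter's result.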
-
An Optimization Problem with a Composite Objective Function
Convex Optimization with Computational Errors, 2020. Co-Authors: Alexander J. Zaslavski. Abstract: In this chapter we study an algorithm for minimizing the sum of two convex functions, the first of which is smooth. Each iteration of the algorithm consists of two steps: the first is a calculation of a subgradient of the first function, and the second is a proximal gradient step for the second function. Each of these two steps carries a computational error, and in general the two errors are different. We show that the algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of the algorithm, we can determine what approximate solution can be obtained and how many iterations this requires.
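As a minimal illustration of the two-step scheme (a gradient step on the smooth part, then a proximal step on the nonsmooth part) with a bounded error injected into the first step, the sketch below minimizes the hypothetical composite objective F(x) = ½(x − 3)² + |x|, whose exact minimizer is x = 2. This is the standard proximal gradient iteration under assumed parameters, not the book's specific algorithm.

```python
import random

def soft(v, lam):
    """Proximal map of lam * |x|: soft-thresholding."""
    return max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0)

def prox_grad(steps=100, eta=0.5, err=1e-4, seed=1):
    """Proximal gradient method for F(x) = 0.5*(x - 3)**2 + |x|.
    The gradient of the smooth part carries a bounded
    computational error; the proximal step is exact."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        g = (x - 3.0) + rng.uniform(-err, err)  # inexact gradient step
        x = soft(x - eta * g, eta)              # exact proximal step
    return x
```

With the error bound set to err, the iterates converge to within roughly err of the true minimizer, matching the chapter's message that the achievable accuracy is governed by the computational error.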
-
Minimization of Quasiconvex Functions
Convex Optimization with Computational Errors, 2020. Co-Authors: Alexander J. Zaslavski. Abstract: In this chapter we study the minimization of a quasiconvex function. Our algorithm has two steps, each of which carries a computational error; in general the two errors are different. We show that the algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of the algorithm, we can determine what approximate solution can be obtained and how many iterations this requires.
-
PDA-Based Method for Convex Optimization
Convex Optimization with Computational Errors, 2020. Co-Authors: Alexander J. Zaslavski. Abstract: In this chapter we use the predicted decrease approximation (PDA) for constrained convex optimization. Each iteration of the PDA-based method consists of two steps, each of which carries a computational error; in general the two errors are different. We show that the algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of the algorithm, we can determine what approximate solution can be obtained and how many iterations this requires.
-
Weiszfeld’s Method
Springer Optimization and Its Applications, 2016. Co-Authors: Alexander J. Zaslavski. Abstract: In this chapter we analyze the behavior of Weiszfeld's method for solving the Fermat–Weber location problem. We show that the algorithm generates a good approximate solution if the computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what approximate solution can be obtained and how many iterations this requires.
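Weiszfeld's method itself is classical: iteratively re-average the anchor points with weights inversely proportional to their current distances from the iterate. The sketch below is a minimal 2D version with exact arithmetic and a fixed iteration count, both simplifications relative to the chapter's error analysis.

```python
from math import hypot

def weiszfeld(points, iters=200, eps=1e-12):
    """Weiszfeld iteration for the Fermat-Weber point of 2D anchors:
    repeatedly re-average the anchors weighted by inverse distance."""
    x = sum(p[0] for p in points) / len(points)  # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        nx = ny = den = 0.0
        for px, py in points:
            d = max(hypot(x - px, y - py), eps)  # guard against division by zero
            nx += px / d
            ny += py / d
            den += 1.0 / d
        x, y = nx / den, ny / den
    return x, y
```

At an interior optimum the unit vectors toward the anchors sum to zero, which gives a simple convergence check; the eps guard is needed because the update is undefined when the iterate lands exactly on an anchor.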