The experts below are selected from a list of 471,936 experts worldwide ranked by the ideXlab platform.
Yoav Y. Schechner - One of the best experts on this subject based on the ideXlab platform.
-
Ultrawide Foveated Video Extrapolation
IEEE Journal of Selected Topics in Signal Processing, 2011
Co-Authors: Tamar Avraham, Yoav Y. Schechner
Abstract: Consider the task of creating a very wide visual extrapolation, i.e., a synthetic continuation of the field of view much beyond the acquired data. Existing related methods deal mainly with filling in holes in images and video. These methods are very time consuming and often prone to noticeable artifacts. The probability of artifacts grows as the synthesized regions become more distant from the domain of the raw video. Therefore, such methods do not lend themselves easily to very large extrapolations. We suggest an approach to enable this task. First, an improved completion algorithm that rejects peripheral distractions significantly reduces attention-drawing artifacts. Second, a foveated video extrapolation approach exploits weaknesses of the human visual system in order to enable efficient extrapolation of video, while further reducing attention-drawing artifacts. Consider a screen showing the raw video. Let the region beyond the raw video domain reside outside the field corresponding to the viewer's fovea. Then, the farther the extrapolated synthetic region is from the raw field of view, the more the spatial resolution can be reduced. This enables image synthesis using spatial blocks that become gradually coarser and significantly fewer (per unit area) as the extrapolated region expands. The substantial reduction in the number of synthesized blocks notably speeds the process and increases the probability of success without distracting artifacts. Furthermore, results supporting the foveated approach are obtained in a user study.
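The falloff described above (synthesis blocks that grow coarser, and therefore fewer per unit area, with distance from the fovea) can be sketched as a simple rule. This is an illustrative approximation, not the authors' implementation; `base_px` and `e0_deg` are hypothetical parameters.

```python
def block_size(eccentricity_deg, base_px=8, e0_deg=2.5):
    """Illustrative foveation rule: the synthesis block edge length grows
    linearly with angular distance from the fovea, mimicking the
    eccentricity-dependent drop in human visual acuity.
    base_px and e0_deg are hypothetical, not taken from the paper."""
    return int(base_px * (1.0 + eccentricity_deg / e0_deg))

# Blocks near the raw frame edge stay small; far-out blocks are coarse,
# so far fewer blocks per unit area need to be synthesized there.
sizes = [block_size(e) for e in (0, 5, 10, 20, 40)]
```

Because the block count per unit area falls off quadratically with the block edge length, most of the synthesized area is covered by a small number of cheap, coarse blocks.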
-
ICCP - Multiscale ultrawide foveated video extrapolation
IEEE International Conference on Computational Photography (ICCP), 2011
Co-Authors: Amit Aides, Tamar Avraham, Yoav Y. Schechner
Abstract: Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original video and visually pleasing is difficult. In this work we aim at very wide video extrapolation, which increases the complexity of the task. Some video extrapolation methods simplify the task by using a rough color extrapolation. A recent approach focuses on artifact avoidance and runtime reduction using foveated video extrapolation, but fails to preserve the structure of the scene. This paper introduces a multi-scale method which combines a coarse-to-fine approach with foveated video extrapolation. Foveated video extrapolation reduces the effective number of pixels that need to be extrapolated, making the extrapolation less time consuming and less prone to artifacts. The coarse-to-fine approach better preserves the structure of the scene while preserving finer details near the domain of the input video. The combined method improves both visual quality and processing time.
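The coarse-to-fine idea can be sketched on a 1-D signal. The helpers below are hypothetical stand-ins (real patch-based completion is far more elaborate); they are intended only to show how an extension computed at the coarsest scale guides the new region at finer scales, while the original samples are kept intact.

```python
def downsample(sig):
    # Coarser pyramid level: average adjacent sample pairs.
    return [(sig[i] + sig[i + 1]) / 2.0 for i in range(0, len(sig) - 1, 2)]

def upsample(sig):
    # Finer level: duplicate each sample (nearest-neighbour expansion).
    return [v for v in sig for _ in range(2)]

def extend(sig, n):
    # Crude placeholder for patch-based completion: hold the last value.
    return list(sig) + [sig[-1]] * n

def coarse_to_fine_extend(sig, n, levels=2):
    """Coarse-to-fine extrapolation sketch: extend at the coarsest scale
    first (capturing large-scale structure cheaply), then at each finer
    scale keep the original samples and take the new region from the
    upsampled coarse extension."""
    if levels == 0 or len(sig) < 2:
        return extend(sig, n)
    coarse = coarse_to_fine_extend(downsample(sig), (n + 1) // 2, levels - 1)
    guess = upsample(coarse)                      # coarse structure, finer grid
    ext = guess[len(sig):len(sig) + n]
    ext += [ext[-1] if ext else sig[-1]] * (n - len(ext))  # pad if short
    return list(sig) + ext
```

The design point mirrors the abstract: the expensive completion runs on the small coarse signal, and finer levels only transfer that structure outward, which is what keeps the method fast and structurally consistent.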
Tamar Avraham - One of the best experts on this subject based on the ideXlab platform.
-
Ultrawide Foveated Video Extrapolation
IEEE Journal of Selected Topics in Signal Processing, 2011
Co-Authors: Tamar Avraham, Yoav Y. Schechner
Abstract: Consider the task of creating a very wide visual extrapolation, i.e., a synthetic continuation of the field of view much beyond the acquired data. Existing related methods deal mainly with filling in holes in images and video. These methods are very time consuming and often prone to noticeable artifacts. The probability of artifacts grows as the synthesized regions become more distant from the domain of the raw video. Therefore, such methods do not lend themselves easily to very large extrapolations. We suggest an approach to enable this task. First, an improved completion algorithm that rejects peripheral distractions significantly reduces attention-drawing artifacts. Second, a foveated video extrapolation approach exploits weaknesses of the human visual system in order to enable efficient extrapolation of video, while further reducing attention-drawing artifacts. Consider a screen showing the raw video. Let the region beyond the raw video domain reside outside the field corresponding to the viewer's fovea. Then, the farther the extrapolated synthetic region is from the raw field of view, the more the spatial resolution can be reduced. This enables image synthesis using spatial blocks that become gradually coarser and significantly fewer (per unit area) as the extrapolated region expands. The substantial reduction in the number of synthesized blocks notably speeds the process and increases the probability of success without distracting artifacts. Furthermore, results supporting the foveated approach are obtained in a user study.
-
ICCP - Multiscale ultrawide foveated video extrapolation
IEEE International Conference on Computational Photography (ICCP), 2011
Co-Authors: Amit Aides, Tamar Avraham, Yoav Y. Schechner
Abstract: Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original video and visually pleasing is difficult. In this work we aim at very wide video extrapolation, which increases the complexity of the task. Some video extrapolation methods simplify the task by using a rough color extrapolation. A recent approach focuses on artifact avoidance and runtime reduction using foveated video extrapolation, but fails to preserve the structure of the scene. This paper introduces a multi-scale method which combines a coarse-to-fine approach with foveated video extrapolation. Foveated video extrapolation reduces the effective number of pixels that need to be extrapolated, making the extrapolation less time consuming and less prone to artifacts. The coarse-to-fine approach better preserves the structure of the scene while preserving finer details near the domain of the input video. The combined method improves both visual quality and processing time.
Mario Milman - One of the best experts on this subject based on the ideXlab platform.
-
Extrapolation methods and Rubio de Francia's extrapolation theorem
Advances in Mathematics, 2006
Co-Authors: Joaquim Martín, Mario Milman
Abstract: We develop a general framework to study extrapolation of inequalities.
MSC: primary 47B38, 46M35; secondary 42B25, 42B20
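The theorem named in the title has a standard concise formulation; the following is the classical statement (with $A_p$ denoting the Muckenhoupt weight classes), included here for context rather than as the paper's generalized version.

```latex
% Rubio de Francia's extrapolation theorem (classical form).
% Suppose that for some fixed $p_0 \in [1,\infty)$ an operator $T$ satisfies
% $\|Tf\|_{L^{p_0}(w)} \le C_w \|f\|_{L^{p_0}(w)}$ for every weight
% $w \in A_{p_0}$. Then the bound extrapolates to all exponents:
\[
  \|Tf\|_{L^{p}(w)} \;\le\; C'_{w}\,\|f\|_{L^{p}(w)}
  \qquad \text{for all } 1 < p < \infty \text{ and all } w \in A_{p}.
\]
```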
-
Extrapolation and Optimal Decompositions: with Applications to Analysis
1994
Co-Authors: Mario Milman
Contents: Background on extrapolation theory; K/J inequalities and limiting embedding theorems; Calculations with the ? method and applications; Bilinear extrapolation and a limiting case of a theorem by Cwikel; Extrapolation, reiteration, and applications; Estimates for commutators in real interpolation; Sobolev imbedding theorems and extrapolation of infinitely many operators; Some remarks on extrapolation spaces and abstract parabolic equations; Optimal decompositions, scales, and Nash-Moser iteration.
Kim J R Rasmussen - One of the best experts on this subject based on the ideXlab platform.
-
Full-range stress–strain curves for stainless steel alloys
Journal of Constructional Steel Research, 2003
Co-Authors: Kim J. R. Rasmussen
Abstract: The paper develops an expression for the stress–strain curves of stainless steel alloys which is valid over the full strain range. The expression is useful for the design and numerical modelling of stainless steel members and elements which reach stresses beyond the 0.2% proof stress in their ultimate limit state. In this stress range, current stress–strain curves based on the Ramberg–Osgood expression become seriously inaccurate, principally because they are extrapolations of curve fits to stresses lower than the 0.2% proof stress. The extrapolation becomes particularly inaccurate for alloys with pronounced strain hardening. The paper also develops expressions for determining the ultimate tensile strength (σ_u) and strain (ε_u) for given values of the Ramberg–Osgood parameters (E_0, σ_0.2, n). The expressions are compared with a wide range of experimental data and shown to be reasonably accurate for all structural classes of stainless steel alloys. Based on the expressions for σ_u and ε_u, it is possible to construct the entire stress–strain curve from the Ramberg–Osgood parameters (E_0, σ_0.2, n).
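The classical Ramberg–Osgood relation that the abstract builds on can be sketched directly. Only this first stage (valid up to the 0.2% proof stress) is shown; the paper's contribution is precisely the extension beyond σ_0.2, which is not reproduced here. The numerical parameter values below are illustrative, not taken from the paper.

```python
def ramberg_osgood_strain(sigma, E0, sigma02, n):
    """Classical Ramberg-Osgood strain at stress sigma (valid up to the
    0.2% proof stress sigma02): an elastic term plus a plastic term
    calibrated so the plastic strain equals 0.002 exactly at sigma02."""
    return sigma / E0 + 0.002 * (sigma / sigma02) ** n

# Illustrative parameters in the range typical of an austenitic stainless
# steel (hypothetical values, not from the paper):
E0, sigma02, n = 200_000.0, 280.0, 6.0   # MPa, MPa, dimensionless
eps_at_proof = ramberg_osgood_strain(sigma02, E0, sigma02, n)
```

At σ = σ_0.2 the plastic term is exactly 0.002 by construction, which is the defining property of the 0.2% proof stress; fitting n to data below σ_0.2 and then evaluating the formula above σ_0.2 is the extrapolation the abstract criticizes.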
Amit Aides - One of the best experts on this subject based on the ideXlab platform.
-
ICCP - Multiscale ultrawide foveated video extrapolation
IEEE International Conference on Computational Photography (ICCP), 2011
Co-Authors: Amit Aides, Tamar Avraham, Yoav Y. Schechner
Abstract: Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original video and visually pleasing is difficult. In this work we aim at very wide video extrapolation, which increases the complexity of the task. Some video extrapolation methods simplify the task by using a rough color extrapolation. A recent approach focuses on artifact avoidance and runtime reduction using foveated video extrapolation, but fails to preserve the structure of the scene. This paper introduces a multi-scale method which combines a coarse-to-fine approach with foveated video extrapolation. Foveated video extrapolation reduces the effective number of pixels that need to be extrapolated, making the extrapolation less time consuming and less prone to artifacts. The coarse-to-fine approach better preserves the structure of the scene while preserving finer details near the domain of the input video. The combined method improves both visual quality and processing time.