Extrapolation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 471,936 Experts worldwide ranked by the ideXlab platform

Yoav Y. Schechner - One of the best experts on this subject based on the ideXlab platform.

  • Ultrawide Foveated Video Extrapolation
    IEEE Journal of Selected Topics in Signal Processing, 2011
    Co-Authors: Tamar Avraham, Yoav Y. Schechner
    Abstract:

    Consider the task of creating a very wide visual extrapolation, i.e., a synthetic continuation of the field of view far beyond the acquired data. Existing related methods deal mainly with filling in holes in images and video. These methods are very time-consuming and often prone to noticeable artifacts. The probability of artifacts grows as the synthesized regions become more distant from the domain of the raw video, so such methods do not lend themselves easily to very large extrapolations. We suggest an approach that enables this task. First, an improved completion algorithm that rejects peripheral distractions significantly reduces attention-drawing artifacts. Second, a foveated video extrapolation approach exploits weaknesses of the human visual system to extrapolate video efficiently while further reducing attention-drawing artifacts. Consider a screen showing the raw video, and let the region beyond the raw video domain lie outside the field corresponding to the viewer's fovea. Then, the farther the extrapolated synthetic region is from the raw field of view, the more its spatial resolution can be reduced. This enables image synthesis using spatial blocks that become gradually coarser and significantly fewer (per unit area) as the extrapolated region expands. The substantial reduction in the number of synthesized blocks notably speeds up the process and increases the probability of success without distracting artifacts. Results of a user study further support the foveated approach.
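    The coarsening schedule the abstract describes can be sketched as a simple rule: synthesis-block size grows with distance from the raw field of view, so block count per unit area drops sharply in the periphery. The linear growth rate and size cap below are illustrative assumptions, not values from the paper.

    ```python
    def block_size(dist_from_fov, base=8, growth=0.5, max_size=64):
        """Edge length (in pixels) of the synthesis block used at a given
        distance from the raw field of view.  Blocks coarsen linearly with
        eccentricity and are capped, so the number of blocks per unit area
        falls roughly quadratically in the far periphery."""
        return min(int(base + growth * dist_from_fov), max_size)

    # Near the raw video the blocks stay fine; far out they are coarse.
    sizes = [block_size(d) for d in (0, 40, 120, 400)]  # [8, 28, 64, 64]
    ```

    Because fewer, coarser blocks are synthesized far from the fovea, the speed-up compounds with distance, which is what makes very wide extrapolation tractable.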

  • ICCP - Multiscale ultrawide foveated video Extrapolation
    2011 IEEE International Conference on Computational Photography (ICCP), 2011
    Co-Authors: Amit Aides, Tamar Avraham, Yoav Y. Schechner
    Abstract:

    Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original video and visually pleasing is difficult, and aiming at very wide extrapolation, as this work does, increases the complexity of the task. Some video extrapolation methods simplify the task by using a rough color extrapolation. A recent approach focuses on artifact avoidance and runtime reduction using foveated video extrapolation, but fails to preserve the structure of the scene. This paper introduces a multi-scale method which combines a coarse-to-fine approach with foveated video extrapolation. Foveated video extrapolation reduces the effective number of pixels that need to be extrapolated, making the extrapolation less time-consuming and less prone to artifacts. The coarse-to-fine approach better preserves the structure of the scene while preserving finer details near the domain of the input video. The combined method improves both visual quality and processing time.
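    The coarse-to-fine control flow can be illustrated on a 1-D signal. The factor-2 pyramid and edge-replication "synthesis" below are stand-ins for the paper's patch-based extrapolation; the point is only the order of operations: extrapolate cheaply at the coarsest level to fix gross structure, then upsample and restore true data at each finer level so detail survives near the input domain.

    ```python
    def extrapolate_multiscale(frame, pad, levels=3):
        """Coarse-to-fine sketch of multiscale extrapolation on a 1-D frame.
        Patch synthesis is replaced by edge replication for brevity."""
        # Build a factor-2 pyramid by subsampling.
        pyramid = [list(frame)]
        for _ in range(levels - 1):
            pyramid.append(pyramid[-1][::2])
        # Coarsest level: "extrapolate" the padded border by edge replication.
        cpad = pad // 2 ** (levels - 1)
        out = [pyramid[-1][0]] * cpad + pyramid[-1] + [pyramid[-1][-1]] * cpad
        # Finer levels: upsample the estimate, then overwrite the interior
        # with the true higher-resolution data; the borders stay extrapolated.
        for lvl in range(levels - 2, -1, -1):
            out = [v for v in out for _ in (0, 1)]  # nearest-neighbour upsample
            known = pyramid[lvl]
            start = (len(out) - len(known)) // 2
            out[start:start + len(known)] = known
        return out
    ```

    For an 8-sample frame with a pad of 4 per side, only one coarse sample per side is synthesized at the bottom of the pyramid, yet the output border reaches full width after upsampling, which is the source of the runtime saving.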

Tamar Avraham - One of the best experts on this subject based on the ideXlab platform.

  • Ultrawide Foveated Video Extrapolation
    IEEE Journal of Selected Topics in Signal Processing, 2011
    Co-Authors: Tamar Avraham, Yoav Y. Schechner
    Abstract:

    Consider the task of creating a very wide visual extrapolation, i.e., a synthetic continuation of the field of view far beyond the acquired data. Existing related methods deal mainly with filling in holes in images and video. These methods are very time-consuming and often prone to noticeable artifacts. The probability of artifacts grows as the synthesized regions become more distant from the domain of the raw video, so such methods do not lend themselves easily to very large extrapolations. We suggest an approach that enables this task. First, an improved completion algorithm that rejects peripheral distractions significantly reduces attention-drawing artifacts. Second, a foveated video extrapolation approach exploits weaknesses of the human visual system to extrapolate video efficiently while further reducing attention-drawing artifacts. Consider a screen showing the raw video, and let the region beyond the raw video domain lie outside the field corresponding to the viewer's fovea. Then, the farther the extrapolated synthetic region is from the raw field of view, the more its spatial resolution can be reduced. This enables image synthesis using spatial blocks that become gradually coarser and significantly fewer (per unit area) as the extrapolated region expands. The substantial reduction in the number of synthesized blocks notably speeds up the process and increases the probability of success without distracting artifacts. Results of a user study further support the foveated approach.

  • ICCP - Multiscale ultrawide foveated video Extrapolation
    2011 IEEE International Conference on Computational Photography (ICCP), 2011
    Co-Authors: Amit Aides, Tamar Avraham, Yoav Y. Schechner
    Abstract:

    Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original video and visually pleasing is difficult, and aiming at very wide extrapolation, as this work does, increases the complexity of the task. Some video extrapolation methods simplify the task by using a rough color extrapolation. A recent approach focuses on artifact avoidance and runtime reduction using foveated video extrapolation, but fails to preserve the structure of the scene. This paper introduces a multi-scale method which combines a coarse-to-fine approach with foveated video extrapolation. Foveated video extrapolation reduces the effective number of pixels that need to be extrapolated, making the extrapolation less time-consuming and less prone to artifacts. The coarse-to-fine approach better preserves the structure of the scene while preserving finer details near the domain of the input video. The combined method improves both visual quality and processing time.

Kim J R Rasmussen - One of the best experts on this subject based on the ideXlab platform.

  • Full-range stress–strain curves for stainless steel alloys
    Journal of Constructional Steel Research, 2003
    Co-Authors: Kim J R Rasmussen
    Abstract:

    The paper develops an expression for the stress–strain curves of stainless steel alloys which is valid over the full strain range. The expression is useful for the design and numerical modelling of stainless steel members and elements which reach stresses beyond the 0.2% proof stress in their ultimate limit state. In this stress range, current stress–strain curves based on the Ramberg–Osgood expression become seriously inaccurate, principally because they are extrapolations of curve fits to stresses lower than the 0.2% proof stress. The extrapolation becomes particularly inaccurate for alloys with pronounced strain hardening. The paper also develops expressions for determining the ultimate tensile strength (σ_u) and strain (ε_u) for given values of the Ramberg–Osgood parameters (E_0, σ_0.2, n). The expressions are compared with a wide range of experimental data and shown to be reasonably accurate for all structural classes of stainless steel alloys. Based on the expressions for σ_u and ε_u, it is possible to construct the entire stress–strain curve from the Ramberg–Osgood parameters (E_0, σ_0.2, n).
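    The construction the abstract describes — the standard Ramberg–Osgood curve up to the 0.2% proof stress, plus a second Ramberg–Osgood-type branch anchored at (σ_0.2, ε_0.2) and (σ_u, ε_u) — can be sketched as below. The closed-form constants for σ_u, ε_u, the tangent modulus E_0.2, and the second-branch exponent follow Rasmussen's proposal for austenitic and duplex alloys; they are assumptions here and should be checked against the paper for other alloy classes.

    ```python
    def full_range_strain(sigma, E0, s02, n):
        """Strain at stress `sigma` from a two-branch full-range model:
        Ramberg-Osgood below the 0.2% proof stress s02, and a second
        branch anchored at (s02, e02) and (su, eu) above it."""
        e = s02 / E0                          # proof-stress elastic strain ratio
        su = s02 / (0.2 + 185.0 * e)          # ultimate tensile strength (assumed fit)
        eu = 1.0 - s02 / su                   # ultimate strain (assumed fit)
        E02 = E0 / (1.0 + 0.002 * n / e)      # tangent modulus at s02
        m = 1.0 + 3.5 * s02 / su              # exponent of the second branch
        if sigma <= s02:
            # Standard Ramberg-Osgood expression, valid up to s02.
            return sigma / E0 + 0.002 * (sigma / s02) ** n
        e02 = s02 / E0 + 0.002                # total strain at the proof stress
        ds = sigma - s02
        return ds / E02 + eu * (ds / (su - s02)) ** m + e02
    ```

    Because the second branch starts from (σ_0.2, ε_0.2) with slope E_0.2, the curve is continuous in both value and tangent across the proof stress, which is what the single extrapolated Ramberg–Osgood fit fails to guarantee.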

Amit Aides - One of the best experts on this subject based on the ideXlab platform.

  • ICCP - Multiscale ultrawide foveated video Extrapolation
    2011 IEEE International Conference on Computational Photography (ICCP), 2011
    Co-Authors: Amit Aides, Tamar Avraham, Yoav Y. Schechner
    Abstract:

    Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original video and visually pleasing is difficult, and aiming at very wide extrapolation, as this work does, increases the complexity of the task. Some video extrapolation methods simplify the task by using a rough color extrapolation. A recent approach focuses on artifact avoidance and runtime reduction using foveated video extrapolation, but fails to preserve the structure of the scene. This paper introduces a multi-scale method which combines a coarse-to-fine approach with foveated video extrapolation. Foveated video extrapolation reduces the effective number of pixels that need to be extrapolated, making the extrapolation less time-consuming and less prone to artifacts. The coarse-to-fine approach better preserves the structure of the scene while preserving finer details near the domain of the input video. The combined method improves both visual quality and processing time.