Temporal Filtering

The Experts below are selected from a list of 23,559 Experts worldwide, ranked by the ideXlab platform

Deepak S Turaga - One of the best experts on this subject based on the ideXlab platform.

  • unconstrained motion compensated Temporal Filtering umctf for efficient and flexible interframe wavelet video coding
    Signal Processing-image Communication, 2005
    Co-Authors: Deepak S Turaga, Yiannis Andreopoulos, Adrian Munteanu, M. Van Der Schaar, Peter Schelkens
    Abstract:

    We introduce an efficient and flexible framework for Temporal Filtering in wavelet-based scalable video codecs called unconstrained motion compensated Temporal Filtering (UMCTF). UMCTF allows for the use of different filters and Temporal decomposition structures through a set of controlling parameters that may be easily modified during the coding process, at different granularities and levels. The proposed framework enables the adaptation of the coding process to the video content, network and end-device characteristics, allows for enhanced scalability, content-adaptivity and reduced delay, while improving the coding efficiency as compared to state-of-the-art motion-compensated wavelet video coders. Additionally, a mechanism for the control of the distortion variation in video coding based on UMCTF employing only the predict step is proposed. The control mechanism is formulated by expressing the distortion in an arbitrary decoded frame, at any Temporal level in the pyramid, as a function of the distortions in the reference frames at the same Temporal level. All the different scenarios proposed in the paper are experimentally validated through a coding scheme that incorporates advanced features (such as rate-distortion optimized variable block-size multihypothesis prediction and overlapped block motion compensation). Experiments are carried out to determine the relative efficiency of different UMCTF instantiations, as well as to compare against the current state-of-the-art in video coding.
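
    A minimal sketch (not the authors' implementation) of the predict-only special case mentioned in this abstract: odd frames are replaced by residuals against a prediction from the neighbouring even frames, and, since there is no update step, the even frames pass unchanged to the next Temporal level. The average-of-neighbours predictor below is a stand-in for the rate-distortion optimized, motion-compensated multihypothesis prediction used in the paper.

    ```python
    # Predict-only (no update step) temporal decomposition, the UMCTF special
    # case discussed above. The predictor is a plain average of the two
    # neighbouring even frames, which is only illustrative.
    import numpy as np

    def umctf_predict_only(frames, levels=2):
        """Decompose a list of equal-sized frames into temporal A/H subbands."""
        approx = [f.astype(np.float64) for f in frames]
        h_bands = []
        for _ in range(levels):
            if len(approx) < 2:
                break
            evens = approx[0::2]
            odds = approx[1::2]
            h = []
            for i, odd in enumerate(odds):
                left = evens[i]
                right = evens[i + 1] if i + 1 < len(evens) else evens[i]
                pred = 0.5 * (left + right)      # stand-in for MC prediction
                h.append(odd - pred)             # high-pass (residual) frame
            h_bands.append(h)
            approx = evens                       # A frames pass on untouched
        return approx, h_bands

    frames = [np.full((4, 4), v) for v in range(8)]
    a, h = umctf_predict_only(frames)
    print(len(a), [len(x) for x in h])           # 2 A-frames, [4, 2] H-frames
    ```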

  • multiple description scalable coding using wavelet based motion compensated Temporal Filtering
    International Conference on Image Processing, 2003
    Co-Authors: M Van Der Schaar, Deepak S Turaga
    Abstract:

    Packet delay jitter and loss due to network congestion pose significant challenges for designing and deploying delay-sensitive multimedia applications over best-effort packet-switched networks such as the Internet. Recent studies indicate that using multiple description coding (MDC) in conjunction with path or server diversity can mitigate these effects. However, the proposed MDC coding and streaming techniques are based on non-scalable coding techniques. A key disadvantage of these techniques is that they can only improve the error resilience of the transmitted video, but are not able to address two other important challenges associated with the robust transmission of video over unreliable networks: adaptation to bandwidth variations and to receiving-device characteristics. In this paper, we present a new paradigm, referred to as multiple description scalable coding (MDSC), that is able to address all the previously mentioned challenges by combining the advantages of scalable coding and MDC. This framework enables tradeoffs between throughput, redundancy and complexity at transmission time, unlike previous non-scalable MDC schemes. Furthermore, we propose a novel MDSC scheme based on motion compensated Temporal Filtering (MCTF), denominated multiple description motion compensated Temporal Filtering (MD-MCTF), which exploits the flexibility inherent in current MCTF schemes that use the lifting implementation of Temporal Filtering. We show how tradeoffs between throughput, redundancy and complexity can easily be achieved by adaptively partitioning the video into several descriptions after MCTF. Our simulations show that the proposed MD-MCTF framework outperforms existing MDC schemes over a variety of network conditions.
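
    The partitioning idea can be pictured with a toy sketch: after MCTF, each description keeps the Temporal low-pass (A) frames as controlled redundancy and takes a disjoint subset of the high-pass (H) frames. The fixed round-robin split and the two-description setup below are illustrative assumptions; the paper partitions adaptively.

    ```python
    # Hypothetical post-MCTF partitioning: every description duplicates the
    # A frames (redundancy) and takes every n-th H frame (disjoint subsets).
    def make_descriptions(a_frames, h_frames, n_desc=2):
        descriptions = []
        for d in range(n_desc):
            descriptions.append({
                "A": list(a_frames),           # duplicated: redundancy
                "H": h_frames[d::n_desc],      # disjoint H subsets
            })
        return descriptions

    descs = make_descriptions(["A0", "A1"], ["H0", "H1", "H2", "H3"])
    for i, d in enumerate(descs):
        print(i, d["H"])   # 0 ['H0', 'H2'] / 1 ['H1', 'H3']
    ```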

  • unconstrained motion compensated Temporal Filtering umctf framework for wavelet video coding
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: M Van Der Schaar, Deepak S Turaga
    Abstract:

    This paper presents a new framework for adaptive Temporal Filtering in wavelet interframe codecs, called unconstrained motion compensated Temporal Filtering (UMCTF). This framework allows flexible and efficient Temporal Filtering by combining the best features of motion compensation, used in predictive coding, with the advantages of interframe scalable wavelet video coding schemes. UMCTF provides higher coding efficiency, improved visual quality, greater flexibility of Temporal and spatial scalability, and lower decoding delay than conventional MCTF schemes. Furthermore, UMCTF can also be employed in alternative open-loop scalable coding frameworks using DCT for the texture coding.

M Van Der Schaar - One of the best experts on this subject based on the ideXlab platform.

  • In-band motion compensated Temporal Filtering
    Signal Processing: Image Communication, 2004
    Co-Authors: Yiannis Andreopoulos, M Van Der Schaar, Adrian Munteanu, Joeri Barbarien, J Cornelis, Peter Schelkens
    Abstract:

    A novel framework for fully scalable video coding that performs open-loop motion-compensated Temporal Filtering (MCTF) in the wavelet domain (in-band) is presented in this paper. Unlike the conventional spatial-domain MCTF (SDMCTF) schemes, which apply MCTF on the original image data and then encode the residuals using the critically sampled discrete wavelet transform (DWT), the proposed framework applies the in-band MCTF (IBMCTF) after the DWT is performed in the spatial dimensions. To overcome the inefficiency of MCTF in the critically-sampled DWT, a complete-to-overcomplete DWT (CODWT) is performed. Recent theoretical findings on the CODWT are reviewed from the application perspective of fully-scalable IBMCTF, and constraints on the transform calculation that allow for fast and seamless resolution-scalable coding are established. Furthermore, inspired by recent work on advanced prediction techniques, an algorithm for optimized multihypothesis Temporal Filtering is proposed in this paper. The application of the proposed algorithm in MCTF-based video coding is demonstrated, and similar improvements as for the multihypothesis prediction algorithms employed in closed-loop video coding are experimentally observed. Experimental instantiations of the proposed IBMCTF and SDMCTF coders with multihypothesis prediction produce single embedded bitstreams, from which subsets are extracted to be compared against the current state-of-the-art in video coding.
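
    The in-band ordering can be sketched as follows, assuming a one-level 2-D Haar transform and zero-motion prediction purely for brevity; the actual framework uses CODWT-derived references and full motion estimation and compensation per subband.

    ```python
    # In-band MCTF ordering: spatial DWT first, then temporal prediction is
    # done per spatial subband. Haar filters and zero-motion prediction are
    # simplifying assumptions.
    import numpy as np

    def haar2d(x):
        x = x.astype(np.float64)
        a = 0.5 * (x[0::2, :] + x[1::2, :]); d = 0.5 * (x[0::2, :] - x[1::2, :])
        ll = 0.5 * (a[:, 0::2] + a[:, 1::2]); lh = 0.5 * (a[:, 0::2] - a[:, 1::2])
        hl = 0.5 * (d[:, 0::2] + d[:, 1::2]); hh = 0.5 * (d[:, 0::2] - d[:, 1::2])
        return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

    def ibmctf_pair(frame0, frame1):
        """Temporal H-band per spatial subband for one frame pair."""
        b0, b1 = haar2d(frame0), haar2d(frame1)
        return {name: b1[name] - b0[name] for name in b0}  # zero-motion predict

    f0 = np.arange(64.0).reshape(8, 8)
    f1 = f0 + 1.0
    residual = ibmctf_pair(f0, f1)
    print({k: float(v.mean()) for k, v in residual.items()})
    # LL carries the DC shift; the detail bands cancel
    ```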

  • multiple description scalable coding using wavelet based motion compensated Temporal Filtering
    International Conference on Image Processing, 2003
    Co-Authors: M Van Der Schaar, Deepak S Turaga
    Abstract:

    Packet delay jitter and loss due to network congestion pose significant challenges for designing and deploying delay-sensitive multimedia applications over best-effort packet-switched networks such as the Internet. Recent studies indicate that using multiple description coding (MDC) in conjunction with path or server diversity can mitigate these effects. However, the proposed MDC coding and streaming techniques are based on non-scalable coding techniques. A key disadvantage of these techniques is that they can only improve the error resilience of the transmitted video, but are not able to address two other important challenges associated with the robust transmission of video over unreliable networks: adaptation to bandwidth variations and to receiving-device characteristics. In this paper, we present a new paradigm, referred to as multiple description scalable coding (MDSC), that is able to address all the previously mentioned challenges by combining the advantages of scalable coding and MDC. This framework enables tradeoffs between throughput, redundancy and complexity at transmission time, unlike previous non-scalable MDC schemes. Furthermore, we propose a novel MDSC scheme based on motion compensated Temporal Filtering (MCTF), denominated multiple description motion compensated Temporal Filtering (MD-MCTF), which exploits the flexibility inherent in current MCTF schemes that use the lifting implementation of Temporal Filtering. We show how tradeoffs between throughput, redundancy and complexity can easily be achieved by adaptively partitioning the video into several descriptions after MCTF. Our simulations show that the proposed MD-MCTF framework outperforms existing MDC schemes over a variety of network conditions.

  • unconstrained motion compensated Temporal Filtering umctf framework for wavelet video coding
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: M Van Der Schaar, Deepak S Turaga
    Abstract:

    This paper presents a new framework for adaptive Temporal Filtering in wavelet interframe codecs, called unconstrained motion compensated Temporal Filtering (UMCTF). This framework allows flexible and efficient Temporal Filtering by combining the best features of motion compensation, used in predictive coding, with the advantages of interframe scalable wavelet video coding schemes. UMCTF provides higher coding efficiency, improved visual quality, greater flexibility of Temporal and spatial scalability, and lower decoding delay than conventional MCTF schemes. Furthermore, UMCTF can also be employed in alternative open-loop scalable coding frameworks using DCT for the texture coding.

  • fully scalable wavelet video coding using in band motion compensated Temporal Filtering
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: Yiannis Andreopoulos, M Van Der Schaar, Adrian Munteanu, Joeri Barbarien, Peter Schelkens, J Cornelis
    Abstract:

    This paper presents a novel fully-scalable wavelet video coding scheme that performs efficient open-loop motion compensated Temporal Filtering (MCTF) in the wavelet domain (in-band). Unlike the conventional spatial-domain MCTF (SDMCTF) schemes, which apply MCTF on the original image data and then encode the residual image using a critically-sampled wavelet transform, the framework presented here applies the in-band MCTF (IBMCTF) after the discrete wavelet transform (DWT) is performed in the spatial dimensions. To overcome the inefficiency of motion estimation (ME) in the wavelet domain, a complete-to-overcomplete DWT (CODWT) is performed. The proposed framework provides improved quality (SNR) and Temporal scalability as compared with existing in-band closed-loop Temporal prediction schemes with ODWT and improved spatial scalability as compared to SDMCTF. We present a thorough comparison between SDMCTF and the proposed IBMCTF in terms of coding efficiency and scalability. Furthermore, we describe several extensions that enable the Filtering of the various bands to be performed independently, based on the resolution, sequence content, complexity requirements and desired scalability.
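
    A brute-force 1-D illustration of why an overcomplete representation is needed: the critically sampled Haar low band is shift-variant, so keeping the coefficients of every input phase restores shift-invariance and makes in-band matching meaningful. Transforming explicit shifts, as below, is only a stand-in for the CODWT, which derives these coefficients directly from the critically sampled transform.

    ```python
    # Overcomplete low band by brute force: transform the signal and each of
    # its circular shifts (one extra phase suffices for a one-level Haar DWT).
    import numpy as np

    def haar_low(x):
        x = x.astype(np.float64)
        return 0.5 * (x[0::2] + x[1::2])       # critically sampled low band

    sig = np.array([1.0, 4.0, 9.0, 16.0, 25.0, 36.0])
    phase0 = haar_low(sig)                      # transform of the signal
    phase1 = haar_low(np.roll(sig, 1))          # transform of its 1-shift
    print(phase0, phase1)  # together: the shift-complete (overcomplete) band
    ```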

Peter Schelkens - One of the best experts on this subject based on the ideXlab platform.

  • unconstrained motion compensated Temporal Filtering umctf for efficient and flexible interframe wavelet video coding
    Signal Processing-image Communication, 2005
    Co-Authors: Deepak S Turaga, Yiannis Andreopoulos, Adrian Munteanu, M. Van Der Schaar, Peter Schelkens
    Abstract:

    We introduce an efficient and flexible framework for Temporal Filtering in wavelet-based scalable video codecs called unconstrained motion compensated Temporal Filtering (UMCTF). UMCTF allows for the use of different filters and Temporal decomposition structures through a set of controlling parameters that may be easily modified during the coding process, at different granularities and levels. The proposed framework enables the adaptation of the coding process to the video content, network and end-device characteristics, allows for enhanced scalability, content-adaptivity and reduced delay, while improving the coding efficiency as compared to state-of-the-art motion-compensated wavelet video coders. Additionally, a mechanism for the control of the distortion variation in video coding based on UMCTF employing only the predict step is proposed. The control mechanism is formulated by expressing the distortion in an arbitrary decoded frame, at any Temporal level in the pyramid, as a function of the distortions in the reference frames at the same Temporal level. All the different scenarios proposed in the paper are experimentally validated through a coding scheme that incorporates advanced features (such as rate-distortion optimized variable block-size multihypothesis prediction and overlapped block motion compensation). Experiments are carried out to determine the relative efficiency of different UMCTF instantiations, as well as to compare against the current state-of-the-art in video coding.

  • In-band motion compensated Temporal Filtering
    Signal Processing: Image Communication, 2004
    Co-Authors: Yiannis Andreopoulos, M Van Der Schaar, Adrian Munteanu, Joeri Barbarien, J Cornelis, Peter Schelkens
    Abstract:

    A novel framework for fully scalable video coding that performs open-loop motion-compensated Temporal Filtering (MCTF) in the wavelet domain (in-band) is presented in this paper. Unlike the conventional spatial-domain MCTF (SDMCTF) schemes, which apply MCTF on the original image data and then encode the residuals using the critically sampled discrete wavelet transform (DWT), the proposed framework applies the in-band MCTF (IBMCTF) after the DWT is performed in the spatial dimensions. To overcome the inefficiency of MCTF in the critically-sampled DWT, a complete-to-overcomplete DWT (CODWT) is performed. Recent theoretical findings on the CODWT are reviewed from the application perspective of fully-scalable IBMCTF, and constraints on the transform calculation that allow for fast and seamless resolution-scalable coding are established. Furthermore, inspired by recent work on advanced prediction techniques, an algorithm for optimized multihypothesis Temporal Filtering is proposed in this paper. The application of the proposed algorithm in MCTF-based video coding is demonstrated, and similar improvements as for the multihypothesis prediction algorithms employed in closed-loop video coding are experimentally observed. Experimental instantiations of the proposed IBMCTF and SDMCTF coders with multihypothesis prediction produce single embedded bitstreams, from which subsets are extracted to be compared against the current state-of-the-art in video coding.

  • fully scalable wavelet video coding using in band motion compensated Temporal Filtering
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: Yiannis Andreopoulos, M Van Der Schaar, Adrian Munteanu, Joeri Barbarien, Peter Schelkens, J Cornelis
    Abstract:

    This paper presents a novel fully-scalable wavelet video coding scheme that performs efficient open-loop motion compensated Temporal Filtering (MCTF) in the wavelet domain (in-band). Unlike the conventional spatial-domain MCTF (SDMCTF) schemes, which apply MCTF on the original image data and then encode the residual image using a critically-sampled wavelet transform, the framework presented here applies the in-band MCTF (IBMCTF) after the discrete wavelet transform (DWT) is performed in the spatial dimensions. To overcome the inefficiency of motion estimation (ME) in the wavelet domain, a complete-to-overcomplete DWT (CODWT) is performed. The proposed framework provides improved quality (SNR) and Temporal scalability as compared with existing in-band closed-loop Temporal prediction schemes with ODWT and improved spatial scalability as compared to SDMCTF. We present a thorough comparison between SDMCTF and the proposed IBMCTF in terms of coding efficiency and scalability. Furthermore, we describe several extensions that enable the Filtering of the various bands to be performed independently, based on the resolution, sequence content, complexity requirements and desired scalability.

  • ICIP (2) - Motion vector coding for in-band motion compensated Temporal Filtering
    Proceedings 2003 International Conference on Image Processing (Cat. No.03CH37429), 1
    Co-Authors: Joeri Barbarien, Yiannis Andreopoulos, Adrian Munteanu, Peter Schelkens, Jan G. Cornelis
    Abstract:

    Recently, a new wavelet-based video codec using in-band motion compensated Temporal Filtering (IBMCTF) was introduced. This codec is fully scalable in resolution, quality and frame rate. In comparison to an equivalent video coding scheme based on spatial-domain motion compensated Temporal Filtering (SDMCTF), its compression performance when decoding to lower resolutions is very promising. However, since the IBMCTF scheme is based on in-band motion estimation, considerably more motion vector data is generated than in the SDMCTF scheme. Efficient compression of these motion vectors is therefore of utmost importance. In this paper, several solutions for the compression of motion vectors generated by a video codec based on IBMCTF are presented and compared.
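
    One standard ingredient of such solutions, shown here as a hedged sketch rather than the paper's method, is spatial prediction of each motion vector from the median of already-coded neighbours, so that only small residuals need to be entropy coded.

    ```python
    # Median prediction of motion vectors: for a smooth field, almost all
    # residuals are zero and cheap to entropy-code. Out-of-frame neighbours
    # are substituted with zero vectors, a common simplifying convention.
    import numpy as np

    def mv_residuals(mv_field):
        """mv_field: (rows, cols, 2) int array of per-block motion vectors."""
        rows, cols, _ = mv_field.shape
        res = np.zeros_like(mv_field)
        for r in range(rows):
            for c in range(cols):
                left = mv_field[r, c - 1] if c > 0 else np.zeros(2, int)
                up = mv_field[r - 1, c] if r > 0 else np.zeros(2, int)
                upr = (mv_field[r - 1, c + 1]
                       if (r > 0 and c + 1 < cols) else np.zeros(2, int))
                pred = np.median([left, up, upr], axis=0).astype(int)
                res[r, c] = mv_field[r, c] - pred
        return res

    field = np.tile(np.array([2, -1]), (4, 4, 1))  # smooth motion field
    print(np.abs(mv_residuals(field)).sum())       # 12: only row 0 pays
    ```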

Yiannis Andreopoulos - One of the best experts on this subject based on the ideXlab platform.

  • unconstrained motion compensated Temporal Filtering umctf for efficient and flexible interframe wavelet video coding
    Signal Processing-image Communication, 2005
    Co-Authors: Deepak S Turaga, Yiannis Andreopoulos, Adrian Munteanu, M. Van Der Schaar, Peter Schelkens
    Abstract:

    We introduce an efficient and flexible framework for Temporal Filtering in wavelet-based scalable video codecs called unconstrained motion compensated Temporal Filtering (UMCTF). UMCTF allows for the use of different filters and Temporal decomposition structures through a set of controlling parameters that may be easily modified during the coding process, at different granularities and levels. The proposed framework enables the adaptation of the coding process to the video content, network and end-device characteristics, allows for enhanced scalability, content-adaptivity and reduced delay, while improving the coding efficiency as compared to state-of-the-art motion-compensated wavelet video coders. Additionally, a mechanism for the control of the distortion variation in video coding based on UMCTF employing only the predict step is proposed. The control mechanism is formulated by expressing the distortion in an arbitrary decoded frame, at any Temporal level in the pyramid, as a function of the distortions in the reference frames at the same Temporal level. All the different scenarios proposed in the paper are experimentally validated through a coding scheme that incorporates advanced features (such as rate-distortion optimized variable block-size multihypothesis prediction and overlapped block motion compensation). Experiments are carried out to determine the relative efficiency of different UMCTF instantiations, as well as to compare against the current state-of-the-art in video coding.

  • In-band motion compensated Temporal Filtering
    Signal Processing: Image Communication, 2004
    Co-Authors: Yiannis Andreopoulos, M Van Der Schaar, Adrian Munteanu, Joeri Barbarien, J Cornelis, Peter Schelkens
    Abstract:

    A novel framework for fully scalable video coding that performs open-loop motion-compensated Temporal Filtering (MCTF) in the wavelet domain (in-band) is presented in this paper. Unlike the conventional spatial-domain MCTF (SDMCTF) schemes, which apply MCTF on the original image data and then encode the residuals using the critically sampled discrete wavelet transform (DWT), the proposed framework applies the in-band MCTF (IBMCTF) after the DWT is performed in the spatial dimensions. To overcome the inefficiency of MCTF in the critically-sampled DWT, a complete-to-overcomplete DWT (CODWT) is performed. Recent theoretical findings on the CODWT are reviewed from the application perspective of fully-scalable IBMCTF, and constraints on the transform calculation that allow for fast and seamless resolution-scalable coding are established. Furthermore, inspired by recent work on advanced prediction techniques, an algorithm for optimized multihypothesis Temporal Filtering is proposed in this paper. The application of the proposed algorithm in MCTF-based video coding is demonstrated, and similar improvements as for the multihypothesis prediction algorithms employed in closed-loop video coding are experimentally observed. Experimental instantiations of the proposed IBMCTF and SDMCTF coders with multihypothesis prediction produce single embedded bitstreams, from which subsets are extracted to be compared against the current state-of-the-art in video coding.

  • fully scalable wavelet video coding using in band motion compensated Temporal Filtering
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: Yiannis Andreopoulos, M Van Der Schaar, Adrian Munteanu, Joeri Barbarien, Peter Schelkens, J Cornelis
    Abstract:

    This paper presents a novel fully-scalable wavelet video coding scheme that performs efficient open-loop motion compensated Temporal Filtering (MCTF) in the wavelet domain (in-band). Unlike the conventional spatial-domain MCTF (SDMCTF) schemes, which apply MCTF on the original image data and then encode the residual image using a critically-sampled wavelet transform, the framework presented here applies the in-band MCTF (IBMCTF) after the discrete wavelet transform (DWT) is performed in the spatial dimensions. To overcome the inefficiency of motion estimation (ME) in the wavelet domain, a complete-to-overcomplete DWT (CODWT) is performed. The proposed framework provides improved quality (SNR) and Temporal scalability as compared with existing in-band closed-loop Temporal prediction schemes with ODWT and improved spatial scalability as compared to SDMCTF. We present a thorough comparison between SDMCTF and the proposed IBMCTF in terms of coding efficiency and scalability. Furthermore, we describe several extensions that enable the Filtering of the various bands to be performed independently, based on the resolution, sequence content, complexity requirements and desired scalability.

  • ICIP (2) - Motion vector coding for in-band motion compensated Temporal Filtering
    Proceedings 2003 International Conference on Image Processing (Cat. No.03CH37429), 1
    Co-Authors: Joeri Barbarien, Yiannis Andreopoulos, Adrian Munteanu, Peter Schelkens, Jan G. Cornelis
    Abstract:

    Recently, a new wavelet-based video codec using in-band motion compensated Temporal Filtering (IBMCTF) was introduced. This codec is fully scalable in resolution, quality and frame rate. In comparison to an equivalent video coding scheme based on spatial-domain motion compensated Temporal Filtering (SDMCTF), its compression performance when decoding to lower resolutions is very promising. However, since the IBMCTF scheme is based on in-band motion estimation, considerably more motion vector data is generated than in the SDMCTF scheme. Efficient compression of these motion vectors is therefore of utmost importance. In this paper, several solutions for the compression of motion vectors generated by a video codec based on IBMCTF are presented and compared.

Chulhee Lee - One of the best experts on this subject based on the ideXlab platform.

  • High quality spatially registered vertical Temporal Filtering for deinterlacing
    IEEE Transactions on Consumer Electronics, 2013
    Co-Authors: Kwon Lee, Chulhee Lee
    Abstract:

    In this paper, we propose a high quality deinterlacing method using vertical Temporal Filtering with spatial registration. Vertical Temporal Filtering methods generally perform well with low levels of complexity. However, they may also produce visible artifacts in moving scenes since incorrect blocks can be used from adjacent fields. By applying global spatial registration before performing vertical Temporal Filtering, we reduced these deinterlacing errors. To compute the global motion vector, we used a small number of pixels to reduce the computational complexity. However, this global spatial registration sometimes produced artifacts in stationary areas. To solve this problem, we selectively used the conventional vertical Temporal Filtering method and the spatially registered vertical Temporal Filtering method. We conducted experiments using CIF and HD progressive video sequences, some of which contained fast motion scenes. Experimental results show that the proposed method noticeably improved video quality in terms of subjective and objective evaluations. The proposed method showed 2-7 dB improvement in terms of PSNR compared to existing methods in fast moving video sequences.
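
    A simplified sketch of the two stages described above: a global motion vector is estimated on sub-sampled pixels, the neighbouring frame is registered by that vector, and each missing line is filled by a vertical-Temporal average. The +/-2 pixel search range, the sub-sampling step and the 50/50 weights are assumptions, not the paper's tuned values.

    ```python
    # Globally registered vertical-temporal deinterlacing sketch.
    import numpy as np

    def global_mv(cur, ref, search=2, step=4):
        """Global motion vector by SAD over sub-sampled pixels."""
        sub_c = cur[::step, ::step]
        best, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = np.roll(ref, (dy, dx), axis=(0, 1))[::step, ::step]
                sad = np.abs(sub_c - cand).sum()
                if best is None or sad < best:
                    best, best_mv = sad, (dy, dx)
        return best_mv

    def deinterlace_vt(field_frame, prev_frame, missing_rows):
        """Fill missing_rows using vertical neighbours and the registered
        previous frame."""
        out = field_frame.astype(np.float64).copy()
        dy, dx = global_mv(field_frame, prev_frame)
        reg = np.roll(prev_frame, (dy, dx), axis=(0, 1)).astype(np.float64)
        for r in missing_rows:
            vertical = 0.5 * (out[max(r - 1, 0)] + out[min(r + 1, len(out) - 1)])
            out[r] = 0.5 * vertical + 0.5 * reg[r]   # registered VT average
        return out

    frame = np.tile(np.arange(8.0), (8, 1))     # scene: horizontal ramp
    prev = np.roll(frame, 1, axis=1)            # previous frame, shifted 1 px
    field = frame.copy(); field[1::2] = 0.0     # odd lines missing
    print(deinterlace_vt(field, prev, range(1, 8, 2))[1])  # recovers [0..7]
    ```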

  • ICCE - High performance deinterlacing using spatially registered vertical-Temporal Filtering
    2012 IEEE International Conference on Consumer Electronics (ICCE), 2012
    Co-Authors: Kwon Lee, Chulhee Lee
    Abstract:

    In this paper, we propose a high performance deinterlacing method using vertical-Temporal Filtering with spatial registration. The vertical-Temporal Filtering method compares favorably with existing deinterlacing methods at low computational complexity. However, it produces undesired artifacts in motion scenes since it assumes constant frame differences. By applying vertical-Temporal Filtering after spatial registration, we can reduce deinterlacing artifacts. To reduce the computational complexity of the spatial registration, we use sub-sampled pixels. Experimental results show that the proposed method improves video quality in terms of subjective and objective evaluations.

  • High quality deinterlacing using content adaptive vertical Temporal Filtering
    IEEE Transactions on Consumer Electronics, 2010
    Co-Authors: Kwon Lee, Chulhee Lee
    Abstract:

    In this paper, we propose a content adaptive vertical Temporal Filtering method for deinterlacing which effectively uses correlations between adjacent frames. The proposed method consists of two steps: an initial interpolation step and an enhancement step. During the initial interpolation step, missing lines are reconstructed by applying a modified content adaptive vertical Temporal Filtering (MCAVTF) method, which is proposed to improve the classification accuracy of the video content. After the initial interpolation step, the reconstructed lines are refined by using an adaptive weighted vertical Temporal Filtering (AWVTF) method. A field averaging filter is also used to enhance stationary local regions. Since the proposed method does not use motion estimation, its complexity is low. Experimental results show that the proposed method outperforms existing methods in terms of subjective and objective evaluations.
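
    The adaptive-weighting idea can be illustrated per pixel: the weight given to the Temporal sample shrinks as the local inter-frame difference grows, falling back to vertical interpolation in moving areas. The logistic weight and the scale constant below are illustrative assumptions, not the MCAVTF/AWVTF formulas from the paper.

    ```python
    # Content adaptive vertical-temporal blend for one missing pixel.
    def adaptive_vt_pixel(above, below, temporal, local_diff, scale=8.0):
        vertical = 0.5 * (above + below)
        w_t = 1.0 / (1.0 + (local_diff / scale) ** 2)  # ~1 static, ~0 moving
        return w_t * temporal + (1.0 - w_t) * vertical

    print(adaptive_vt_pixel(100, 110, 104, local_diff=0))   # 104.0 (temporal)
    print(adaptive_vt_pixel(100, 110, 104, local_diff=40))  # ~104.96 (vertical)
    ```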

  • Deinterlacing with motion adaptive vertical Temporal Filtering
    IEEE Transactions on Consumer Electronics, 2009
    Co-Authors: Kwon Lee, Jonghwa Lee, Chulhee Lee
    Abstract:

    In this paper, we propose a deinterlacing method with motion adaptive vertical Temporal Filtering, which utilizes the correlations between adjacent frames. We first interpolate the missing lines of the current frame and the adjacent frames by using an intrafield deinterlacing method. Then we compute the pixel differences between the current frame and the adjacent frames. Since the differences between adjacent frames show similar patterns, we can use these patterns to improve deinterlacing performance. In other words, instead of performing deinterlacing in the frame domain, we perform the operation in the frame-difference domain. Since the proposed method performs well in stationary regions, we selectively apply the vertical Temporal filter. We then apply the proposed method iteratively in order to enhance video quality. The proposed method has low complexity and still produces superior performance. Experimental results show that the proposed method provides noticeable improvements over existing methods in terms of both subjective and objective evaluations.
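
    A minimal sketch of the frame-difference-domain idea under toy assumptions (a static scene, and plain line averaging as the difference-domain filter): the difference between the current field and the complete previous frame is interpolated vertically and added back to the previous frame, instead of interpolating the frame itself. The paper additionally applies motion adaptive selection and iterates.

    ```python
    # Deinterlacing in the frame-difference domain: interpolate the difference
    # to the complete previous frame, not the frame itself, then add it back.
    import numpy as np

    def deinterlace_diff_domain(cur_frame, prev_full, missing_rows):
        diff = cur_frame.astype(np.float64) - prev_full
        out = cur_frame.astype(np.float64).copy()
        for r in missing_rows:
            top = diff[max(r - 1, 0)]
            bot = diff[min(r + 1, len(diff) - 1)]
            out[r] = prev_full[r] + 0.5 * (top + bot)  # interpolated difference
        return out

    prev = np.tile(np.arange(6.0), (6, 1)).T   # row r holds the value r
    cur = prev.copy(); cur[[1, 3]] = 0.0       # two lines missing
    print(deinterlace_diff_domain(cur, prev, (1, 3))[1])  # row of 1.0: exact
    ```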