Linear Model

The experts below are selected from a list of 1,526,313 experts worldwide, ranked by the ideXlab platform.

Marta Karczewicz - One of the best experts on this subject based on the ideXlab platform.

  • enhanced cross component Linear Model for chroma intra prediction in video coding
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Kai Zhang, Jianle Chen, Li Zhang, Xiang Li, Marta Karczewicz
    Abstract:

    Cross-component linear model (CCLM) for chroma intra-prediction is a promising coding tool in the Joint Exploration Model (JEM) developed by the Joint Video Exploration Team (JVET). CCLM assumes a linear correlation between the luma and chroma components in a coding block. Under this assumption, the chroma components can be predicted by the linear model (LM) mode, which uses the reconstructed neighboring samples to derive the parameters of a linear model by linear regression. This paper presents three new methods to further improve the coding efficiency of CCLM. First, we introduce a multi-model CCLM (MM-CCLM) approach, which applies more than one linear model to a coding block. With MM-CCLM, the reconstructed neighboring luma and chroma samples of the current block are classified into several groups, and a particular set of linear-model parameters is derived for each group. The reconstructed luma samples of the current block are likewise classified, so that the associated chroma samples are predicted with the corresponding linear model. Second, we propose a multi-filter CCLM (MF-CCLM) technique, which allows the encoder to select the optimal down-sampling filter for the luma component in the 4:2:0 color format. Third, we present an LM-angular prediction method, which synthesizes angular intra-prediction and MM-CCLM intra-prediction into a new chroma intra-coding mode. Simulation results show that the three proposed methods achieve average BD-rate savings of 0.55%, 4.66%, and 5.08% for the Y, Cb, and Cr components, respectively, in the all-intra configuration. MM-CCLM and MF-CCLM have been adopted into the JEM by JVET.
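
    The LM-mode derivation described above can be sketched in a few lines. This is a simplified floating-point illustration; the function names are mine, and the actual JEM implementation derives the model with fixed-point integer arithmetic rather than a NumPy least-squares fit:

```python
import numpy as np

def derive_cclm_params(neigh_luma, neigh_chroma):
    """Fit chroma ~= alpha * luma + beta over the reconstructed
    neighbouring samples by linear regression (the LM-mode derivation)."""
    x = np.asarray(neigh_luma, dtype=float)
    y = np.asarray(neigh_chroma, dtype=float)
    alpha, beta = np.polyfit(x, y, 1)  # degree-1 least-squares fit
    return alpha, beta

def predict_chroma(block_luma, alpha, beta):
    """Predict the chroma block from the co-located (down-sampled)
    reconstructed luma samples of the current block."""
    return alpha * np.asarray(block_luma, dtype=float) + beta
```

    For a neighbourhood whose samples happen to satisfy chroma = 2 * luma + 5 exactly, the fit recovers alpha = 2 and beta = 5, and each chroma sample of the block is then predicted from its co-located luma value by that line.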

  • multi Model based cross component Linear Model chroma intra prediction for video coding
    Visual Communications and Image Processing, 2017
    Co-Authors: Kai Zhang, Jianle Chen, Li Zhang, Marta Karczewicz
    Abstract:

    Cross-component linear model (CCLM) chroma intra prediction assumes a linear correlation between the luma and chroma components in a coding block. Under this assumption, the chroma components can be predicted by the LM mode, which uses the reconstructed neighbouring samples to derive the parameters of the linear model by linear regression. This paper presents a multi-model CCLM (MM-CCLM) approach, which applies more than one linear model in a coding block. With MM-CCLM, the reconstructed neighbouring luma and chroma samples of the current block are classified into several groups, and each group is used as a training set to derive its own linear model. The reconstructed luma samples of the current block are also classified, so that each uses the corresponding linear model to predict the associated chroma samples. Simulation results show that BD-rate savings of 0.26%, 1.89%, and 1.96% on the Y, Cb, and Cr components are achieved on average for the All Intra (AI) configuration. The proposed method has been adopted into the Joint Exploration Model (JEM) by the Joint Video Exploration Team (JVET).
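
    The classify-then-fit step described above can be sketched as follows. This is a minimal sketch assuming two groups split at the mean neighbouring luma value; the helper names are mine, and the codec-side version works in fixed-point arithmetic:

```python
import numpy as np

def derive_mm_cclm(neigh_luma, neigh_chroma):
    """Split the neighbouring samples into two groups by the mean luma
    value and fit one linear model per group (the multi-model idea)."""
    x = np.asarray(neigh_luma, dtype=float)
    y = np.asarray(neigh_chroma, dtype=float)
    thr = x.mean()
    models = []
    for mask in (x <= thr, x > thr):
        a, b = np.polyfit(x[mask], y[mask], 1)  # one model per group
        models.append((a, b))
    return thr, models

def predict_mm(block_luma, thr, models):
    """Classify each reconstructed luma sample of the current block
    against the same threshold and apply the matching model."""
    x = np.asarray(block_luma, dtype=float)
    (a0, b0), (a1, b1) = models
    return np.where(x <= thr, a0 * x + b0, a1 * x + b1)
```

    The point of the grouping is visible when the luma-to-chroma relationship differs between dark and bright regions: each model then captures its own region instead of one line averaging over both.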

Kai Zhang - One of the best experts on this subject based on the ideXlab platform.

  • enhanced cross component Linear Model for chroma intra prediction in video coding
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Kai Zhang, Jianle Chen, Li Zhang, Xiang Li, Marta Karczewicz
    Abstract: identical to the entry listed under Marta Karczewicz above.

  • multi Model based cross component Linear Model chroma intra prediction for video coding
    Visual Communications and Image Processing, 2017
    Co-Authors: Kai Zhang, Jianle Chen, Li Zhang, Marta Karczewicz
    Abstract: identical to the entry listed under Marta Karczewicz above.

Bahari Idrus - One of the best experts on this subject based on the ideXlab platform.

  • a Linear Model based on kalman filter for improving neural network classification performance
    Expert Systems With Applications, 2016
    Co-Authors: Joko Siswantoro, Anton Satria Prabuwono, Azizi Abdullah, Bahari Idrus
    Abstract:

    Highlights:
      • This paper proposes a method to improve neural network classification performance.
      • A linear model is used as post-processing of the neural network.
      • The parameters of the linear model are estimated using Kalman filter iterations.
      • The method can be applied to classify an object regardless of the type of feature.
      • The method has been validated on five different datasets.

    Neural networks have been applied in several classification problems, such as medical diagnosis, handwriting recognition, and product inspection, with good classification performance. The performance of a neural network is characterized by its structure, transfer function, and learning algorithm. However, a neural network classifier tends to be weak if it uses an inappropriate structure. The appropriate structure depends on the complexity of the relationship between the input and the output, and there are no exact rules for determining it. Therefore, improving neural network classification performance without changing the network's structure is a challenging issue. This paper proposes a method to improve neural network classification performance by constructing a linear model based on the Kalman filter as a post-processing step. The linear model transforms the predicted output of the neural network to a value close to the desired output by using a linear combination of the object features and the predicted output. This simple transformation reduces the error of the neural network and improves classification performance. Kalman filter iterations are used to estimate the parameters of the linear model. Five datasets from various domains, with various characteristics such as attribute types, the number of attributes, the number of samples, and the number of classes, were used for empirical validation. The validation results show that the linear model based on the Kalman filter can improve the performance of the original neural network.
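
    The parameter-estimation step described above can be sketched as a Kalman filter over a static state, which reduces to recursive least squares. The feature layout and helper name below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def kalman_linear_fit(Phi, z, q=0.0, r=1.0):
    """Estimate the parameters theta of a linear model z ~= Phi @ theta
    by Kalman filter iterations. Each row of Phi would hold the object
    features, the network's predicted output, and a bias term (assumed
    layout). q: process-noise variance, r: measurement-noise variance."""
    n = Phi.shape[1]
    theta = np.zeros(n)        # state estimate (the model parameters)
    P = np.eye(n) * 1e3        # state covariance, large initial uncertainty
    for phi, zk in zip(Phi, z):
        P = P + q * np.eye(n)                   # predict (static state)
        S = phi @ P @ phi + r                   # innovation variance
        K = P @ phi / S                         # Kalman gain
        theta = theta + K * (zk - phi @ theta)  # correct the estimate
        P = P - np.outer(K, phi) @ P            # shrink the covariance
    return theta
```

    Each observation refines the parameter estimate without re-solving the whole regression, which is what makes the iteration cheap to run as a post-processing stage.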

Jianle Chen - One of the best experts on this subject based on the ideXlab platform.

  • enhanced cross component Linear Model for chroma intra prediction in video coding
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Kai Zhang, Jianle Chen, Li Zhang, Xiang Li, Marta Karczewicz
    Abstract: identical to the entry listed under Marta Karczewicz above.

  • multi Model based cross component Linear Model chroma intra prediction for video coding
    Visual Communications and Image Processing, 2017
    Co-Authors: Kai Zhang, Jianle Chen, Li Zhang, Marta Karczewicz
    Abstract: identical to the entry listed under Marta Karczewicz above.