Key Frame

The Experts below are selected from a list of 91,302 Experts worldwide, ranked by the ideXlab platform.

Riccardo Scopigno - One of the best experts on this subject based on the ideXlab platform.

  • Browsing and exploration of video sequences: A new scheme for Key Frame extraction and 3D visualization using entropy based Jensen divergence
    Information Sciences, 2014
    Co-Authors: Yu Liu, Zhen Yang, Jie Wang, Mateu Sbert, Riccardo Scopigno
    Abstract:

    This paper proposes a unified scheme for video browsing and exploration. Our scheme involves two components: video Key Frame extraction and 3D visualization. For Key Frame extraction, we develop a generic approach in which the Jensen–Shannon divergence (JSD), Jensen–Rényi divergence (JRD) and Jensen–Tsallis divergence (JTD) are investigated for measuring the difference between neighboring video Frames, segmenting a video clip into shots and possibly into sub-shots, and choosing Key Frames in each shot. Our novel approach is computationally inexpensive yet effective, as shown by experimental results. For 3D visualization, an innovative prototype, in which Key Frames, their related information and the video context are displayed, is created for video exploration. Our visualization tool also makes it easy to locate, highlight and remove possibly redundant Key Frames. The Key Frames selected by our approach, combined with the corresponding visualization interface, lead to a fast grasp of the video content.
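
As a minimal illustration of the Frame-difference idea in the abstract above (not the authors' implementation), the Python sketch below computes the Jensen–Shannon divergence between normalized grayscale histograms of consecutive Frames; the file name "clip.mp4" and the 64-bin histogram are arbitrary assumptions.

```python
import cv2
import numpy as np

def histogram(frame, bins=64):
    """Normalized grayscale histogram used as the frame's probability distribution."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
    return hist / hist.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def jensen_shannon_divergence(p, q):
    """JSD(p, q) = H((p + q) / 2) - (H(p) + H(q)) / 2."""
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

# Divergence series between neighboring frames of a clip (hypothetical file name).
cap = cv2.VideoCapture("clip.mp4")
divergences = []
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    divergences.append(jensen_shannon_divergence(histogram(prev), histogram(frame)))
    prev = frame
cap.release()
```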

  • Selection and 3D visualization of video Key Frames
    2010 IEEE International Conference on Systems Man and Cybernetics, 2010
    Co-Authors: Qing Xu, Mateu Sbert, Pengcheng Wang, Bin Long, Miquel Feixas, Riccardo Scopigno
    Abstract:

    Key Frame selection aims to extract a set of typical Frames that represent the visual content of a video sequence; the graphical presentation of Key Frames helps in understanding the video content. This paper proposes a generic method for extracting Key Frames in which the Jensen-Shannon divergence (JSD) is employed to measure the difference between neighboring video Frames, to segment a video clip into shots, and to choose Key Frames in each shot. The novel method for Key Frame extraction is computationally inexpensive yet effective, as shown by experimental results. In addition, an innovative 3D visualization approach, in which video Key Frames and useful information related to the Key Frame selection process are displayed at different levels of detail, is presented as an exploration tool that contributes to a quick and clear understanding of the video content.
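
A hedged sketch of the remaining two steps, assuming the divergence series from the previous sketch: shots are cut where the Frame-to-Frame divergence exceeds a threshold, and within each shot the Frame whose histogram is closest to the shot's mean histogram is taken as the Key Frame. The threshold and the closest-to-mean rule are illustrative assumptions, not necessarily the paper's exact criteria.

```python
import numpy as np

def segment_shots(divergences, threshold):
    """Shot boundaries wherever the frame-to-frame divergence exceeds a threshold."""
    boundaries = [0]
    for i, d in enumerate(divergences, start=1):
        if d > threshold:
            boundaries.append(i)
    boundaries.append(len(divergences) + 1)  # total number of frames
    return [(boundaries[k], boundaries[k + 1]) for k in range(len(boundaries) - 1)]

def key_frame_of_shot(histograms, start, end):
    """Pick the frame whose histogram is closest (L1) to the shot's mean histogram."""
    shot = np.asarray(histograms[start:end])
    mean = shot.mean(axis=0)
    return start + int(np.argmin(np.abs(shot - mean).sum(axis=1)))
```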

Hong-jiang Zhang - One of the best experts on this subject based on the ideXlab platform.

  • A novel video Key Frame extraction algorithm based on perceived motion energy model
    IEEE Transactions on Circuits and Systems for Video Technology, 2003
    Co-Authors: Tianming Liu, Hong-jiang Zhang
    Abstract:

    The Key Frame is a simple yet effective form of summarizing a long video sequence. The number of Key Frames used to abstract a shot should match the visual content complexity within the shot, and the placement of Key Frames should capture the most salient visual content. Motion is the most salient feature for presenting actions or events in video and thus should be the feature used to determine Key Frames. We propose a triangle model of perceived motion energy (PME) to model motion patterns in video, and a scheme to extract Key Frames based on this model. The Frames at the turning points between motion acceleration and motion deceleration are selected as Key Frames. The Key Frame selection process is threshold-free and fast, and the extracted Key Frames are representative.
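
The triangle model of perceived motion energy is specific to the paper; as a rough, hedged approximation of its selection rule (Frames at the turning point from acceleration to deceleration), the sketch below uses the mean dense optical-flow magnitude as a motion-energy proxy and picks peaks of the smoothed curve. The Farneback flow parameters and smoothing window are arbitrary assumptions.

```python
import cv2
import numpy as np

def motion_energy_series(frames):
    """Mean dense optical-flow magnitude per frame pair, a crude motion-energy proxy."""
    energy = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        energy.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    return np.array(energy)

def turning_point_key_frames(energy, window=5):
    """Indices where smoothed motion energy peaks, i.e. acceleration turns into deceleration."""
    kernel = np.ones(window) / window
    smooth = np.convolve(energy, kernel, mode="same")
    return [i for i in range(1, len(smooth) - 1)
            if smooth[i - 1] < smooth[i] >= smooth[i + 1]]
```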

  • Content-based video retrieval and compression: A unified solution
    International Conference on Image Processing, 1997
    Co-Authors: Hong-jiang Zhang, J Y A Wang, Y Altunbasak
    Abstract:

    Video compression and retrieval have been treated as separate problems in the past. We present an object-based video representation that facilitates both compression and retrieval. Typically, in retrieval applications, a video sequence is subdivided in time into a set of shorter segments, each of which contains similar content. These segments are represented by 2-D representative images called "Key-Frames" that greatly reduce the amount of data to be searched. However, Key-Frames do not describe the motions and actions of objects within the segment. We propose a representation that extends the idea of the Key-Frame to further include what we define as "Key-objects". These Key-objects consist of regions within a Key-Frame that move with similar motion. Our Key-objects thus allow a retrieval system to present information to users more efficiently and assist them in browsing and retrieving relevant video content.
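
A hedged sketch of the Key-object idea (regions within a Key-Frame that move with similar motion): dense optical flow is clustered with k-means so that each cluster groups pixels with similar motion. The number of clusters and the use of plain k-means are illustrative assumptions, not the authors' method.

```python
import cv2
import numpy as np

def key_objects_by_motion(prev_gray, gray, n_objects=3):
    """Group pixels of a Key-Frame into motion-coherent clusters via k-means on flow vectors."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vectors = flow.reshape(-1, 2).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 0.5)
    _, labels, centers = cv2.kmeans(vectors, n_objects, None, criteria,
                                    5, cv2.KMEANS_RANDOM_CENTERS)
    # Each entry of the label map assigns a pixel to a "key-object" candidate.
    return labels.reshape(gray.shape), centers
```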

  • An integrated system for content-based video retrieval and browsing
    Pattern Recognition, 1997
    Co-Authors: Hong-jiang Zhang, Jian Hua Wu, Di Zhong, Stephen W. Smoliar
    Abstract:

    This paper presents an integrated system solution for computer-assisted video parsing and content-based video retrieval and browsing. The effectiveness of this solution lies in its use of video content information derived from a parsing process driven by visual feature analysis. That is, parsing temporally segments and abstracts a video source based on low-level image analyses; retrieval and browsing of video are then based on Key-Frame, temporal and motion features of shots. These processes, and a set of tools to facilitate content-based video retrieval and browsing using the resulting feature data set, are presented in detail as functions of an integrated system.
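
A minimal data-model sketch (an assumption about structure, not the paper's system) of how shots, their Key-Frame features and a motion feature could be indexed and then queried for retrieval by feature similarity.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Shot:
    """One temporally segmented unit with its abstraction and index features."""
    start: int
    end: int
    key_frame_indices: list = field(default_factory=list)
    key_frame_features: list = field(default_factory=list)  # e.g. color histograms
    motion_feature: float = 0.0                              # e.g. mean motion magnitude

def retrieve(shots, query_feature, top_k=5):
    """Rank shots by the best match between the query and any of their Key-Frame features.

    Assumes every shot has at least one Key-Frame feature.
    """
    def score(shot):
        return min(np.linalg.norm(np.asarray(f) - query_feature)
                   for f in shot.key_frame_features)
    return sorted(shots, key=score)[:top_k]
```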

Guoliang Fan - One of the best experts on this subject based on the ideXlab platform.

  • Joint Key Frame extraction and object segmentation for content-based video analysis
    IEEE Transactions on Circuits and Systems for Video Technology, 2006
    Co-Authors: Xiaomu Song, Guoliang Fan
    Abstract:

    Key-Frame extraction and object segmentation are usually implemented independently and separately because they operate at different semantic levels and involve different features. In this work, we propose a joint Key-Frame extraction and object segmentation method by constructing a unified feature space for both processes, where Key-Frame extraction is formulated as a feature selection process for object segmentation in the context of Gaussian mixture model (GMM)-based video modeling. Specifically, two divergence-based criteria are introduced for Key-Frame extraction. One favors Key-Frame selections that lead to the maximum pairwise interclass divergence between GMM components. The other aims at maximizing the marginal divergence that reflects the intra-Frame variation of the mean density. The proposed methods can extract representative Key-Frames for object segmentation, and some interesting characteristics of Key-Frames are also discussed. This work provides a unique paradigm for content-based video analysis.
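
The criteria above are divergence-based; as one concrete, hedged example, the sketch below uses the closed-form Kullback-Leibler divergence between Gaussians and sums the symmetric divergence over all pairs of GMM components, one way to score pairwise interclass divergence. The exact divergence used in the paper may differ.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence between two multivariate Gaussians."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def pairwise_interclass_divergence(components):
    """Sum of symmetric KL divergences over all pairs of (mean, covariance) components."""
    total = 0.0
    for i in range(len(components)):
        for j in range(i + 1, len(components)):
            mu_i, cov_i = components[i]
            mu_j, cov_j = components[j]
            total += (kl_gaussian(mu_i, cov_i, mu_j, cov_j)
                      + kl_gaussian(mu_j, cov_j, mu_i, cov_i))
    return total
```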

  • Combined Key Frame extraction and object-based video segmentation
    IEEE Transactions on Circuits and Systems for Video Technology, 2005
    Co-Authors: Lijie Liu, Guoliang Fan
    Abstract:

    Video segmentation has been an important and challenging issue for many video applications. There are usually two different video segmentation approaches: shot-based segmentation, which uses a set of Key-Frames to represent a video shot, and object-based segmentation, which partitions a video shot into objects and background. Since they represent a video shot at different semantic levels, the two segmentation processes are usually implemented separately or independently for video analysis. In this paper, we propose a new approach that combines the two video segmentation techniques. Specifically, a combined Key-Frame extraction and object-based segmentation method is developed based on state-of-the-art video segmentation algorithms and statistical clustering approaches. On the one hand, shot-based segmentation can dramatically facilitate and enhance object-based segmentation by using Key-Frame extraction to select a few Key-Frames for statistical model training. On the other hand, object-based segmentation can be used to improve shot-based segmentation results through model-based Key-Frame refinement. The proposed approach integrates the advantages of both segmentation methods and provides a combined shot-based and object-based framework for a variety of advanced video analysis tasks. Experimental results validate the effectiveness and flexibility of the proposed video segmentation algorithm.
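
A hedged sketch of the "Key-Frames for statistical model training" direction: mixture models are fitted to object and background pixels taken from labelled Key-Frames (here via scikit-learn's GaussianMixture, an implementation choice rather than the paper's), and other Frames are segmented by comparing per-pixel likelihoods.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(object_pixels, background_pixels, n_components=3):
    """Fit one GMM per class on pixel features (e.g. RGB) taken from labelled Key-Frames."""
    fg = GaussianMixture(n_components=n_components).fit(object_pixels)
    bg = GaussianMixture(n_components=n_components).fit(background_pixels)
    return fg, bg

def segment_frame(frame, fg, bg):
    """Label each pixel as object (True) or background (False) by GMM log-likelihood."""
    pixels = frame.reshape(-1, frame.shape[-1]).astype(np.float64)
    mask = fg.score_samples(pixels) > bg.score_samples(pixels)
    return mask.reshape(frame.shape[:2])
```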

Mateu Sbert - One of the best experts on this subject based on the ideXlab platform.

  • Browsing and exploration of video sequences: A new scheme for Key Frame extraction and 3D visualization using entropy based Jensen divergence
    Information Sciences, 2014
    Co-Authors: Yu Liu, Zhen Yang, Jie Wang, Mateu Sbert, Riccardo Scopigno
    Abstract: see the identical entry under Riccardo Scopigno above.

  • Selection and 3D visualization of video Key Frames
    2010 IEEE International Conference on Systems Man and Cybernetics, 2010
    Co-Authors: Qing Xu, Mateu Sbert, Pengcheng Wang, Bin Long, Miquel Feixas, Riccardo Scopigno
    Abstract: see the identical entry under Riccardo Scopigno above.

Sudeep D Thepade - One of the best experts on this subject based on the ideXlab platform.

  • Summarization with Key Frame extraction using Thepade's sorted n-ary block truncation coding applied on Haar wavelet of video Frame
    2016 Conference on Advances in Signal Processing (CASP), 2016
    Co-Authors: Shalakha R Badre, Sudeep D Thepade
    Abstract:

    With the rapid growth of videos available across the internet, it has become necessary to navigate and manage them effectively and to select only the valuable and accurate information from a video. Video summarization helps in acquiring this essential information by producing a concise and accurate representation of the video, and a video summary can be generated with the help of Key Frame extraction: Key Frames represent the main content of the video. In the proposed methodology, Key Frames are extracted using the Haar wavelet transform at various levels together with Thepade's sorted pentnary block truncation coding. A test bed of 30 videos is used for experimentation. Several similarity measures are used to measure the diversity among successive Frames, namely the Canberra distance, Sorensen distance, Wave Hedges distance, Euclidean distance and mean square error; the Euclidean distance gives the best performance. Accuracy increases up to Haar wavelet level 5, while higher levels show a drop in accuracy.
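
A hedged sketch of the Frame-difference step: a Haar approximation band is obtained by repeated 2x2 averaging and consecutive Frames are compared with the Euclidean distance, the best-performing measure reported above. The block-truncation-coding feature step is omitted for brevity, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def haar_approximation(gray, levels=2):
    """Repeated 2x2 averaging, i.e. the approximation (LL) band of a Haar wavelet transform."""
    a = gray.astype(np.float64)
    for _ in range(levels):
        a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]  # crop to even size
        a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
    return a

def euclidean_distance(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def key_frame_indices(gray_frames, levels=2, threshold=500.0):
    """Mark frames whose Haar approximation differs strongly from the previous frame's."""
    keys = [0]
    prev = haar_approximation(gray_frames[0], levels)
    for i, g in enumerate(gray_frames[1:], start=1):
        cur = haar_approximation(g, levels)
        if euclidean_distance(prev, cur) > threshold:
            keys.append(i)
        prev = cur
    return keys
```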

  • Novel video content summarization using Thepade's sorted n-ary block truncation coding
    Procedia Computer Science, 2016
    Co-Authors: Shalakha R Badre, Sudeep D Thepade
    Abstract:

    With the quick growth of multimedia technology, a huge number of videos is available across the world. These videos may be long, and a user may only want to view a summary of a video; in such cases, video content summarization is used. A video comprises a number of Frames, and video content summarization gives a concise form of the video. Key Frames represent the main content of a video, so Key Frame extraction is the main step in video content summarization. This paper proposes a novel method to extract Key Frames from video using Thepade's sorted n-ary block truncation coding (TSBTC). Five variations of TSBTC are evaluated; of these, Thepade's sorted pentnary BTC (TSPBTC) performs best across all similarity measures. The Canberra distance gives effective performance for all TSBTC variants, and the L1 distance family delivers the highest performance compared to the other families.
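
One plausible reading of the sorted n-ary BTC feature (an interpretation for illustration, not a verified reproduction of TSBTC): per colour plane, the pixel values are sorted, split into n parts, and each part's mean is kept; n = 5 corresponds to the pentnary variant mentioned above.

```python
import numpy as np

def sorted_nary_btc_feature(frame, n=5):
    """Per colour plane: sort pixel values, split into n parts, keep each part's mean.

    A sketch of the general sorted n-ary BTC idea; the exact TSBTC definition may
    differ in detail. Expects a colour frame of shape (H, W, channels).
    """
    feature = []
    for c in range(frame.shape[-1]):
        values = np.sort(frame[..., c].ravel().astype(np.float64))
        feature.extend(part.mean() for part in np.array_split(values, n))
    return np.asarray(feature)
```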

  • Novel visual content summarization in videos using KeyFrame extraction with Thepade's sorted ternary block truncation coding and assorted similarity measures
    International Conference on Communication Information & Computing Technology, 2015
    Co-Authors: Sudeep D Thepade, Pritam H Patil
    Abstract:

    A video is made up of Frames. Many video processing applications need to process each video Frame one by one, but processing every Frame consumes a lot of time and resources; video content summarization helps improve the processing speed of such applications. Key Frames in a video are considered for content summarization: a Key Frame is a Frame in which there is a major change compared to the previous video Frames, so Key Frame extraction is central to video content summarization. In applications needing content summarization, such as data storage, retrieval and surveillance, Key Frame extraction plays a vital role. Block Truncation Coding (BTC) is one of the color feature extraction methods used in Content Based Video Retrieval (CBVR), and Thepade's Sorted Ternary BTC (TSTBTC) is an extended version of BTC. This paper explains Key Frame extraction using assorted similarity measures and TSTBTC.
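
The assorted similarity measures named across this group of papers can be written compactly; the sketch below gives Canberra, Sorensen, Euclidean and mean-square-error distances for comparing feature vectors of consecutive Frames. Flagging a Frame as a Key Frame when its distance to the predecessor exceeds a threshold is a common criterion and an assumption here, not necessarily the paper's exact rule.

```python
import numpy as np

def canberra(a, b):
    denom = np.abs(a) + np.abs(b)
    mask = denom > 0
    return np.sum(np.abs(a - b)[mask] / denom[mask])

def sorensen(a, b):
    # Assumes non-negative feature values (e.g. block means or histograms).
    return np.sum(np.abs(a - b)) / np.sum(a + b)

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def mean_square_error(a, b):
    return np.mean((a - b) ** 2)
```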

  • Novel KeyFrame Extraction for Video Content Summarization using LBG Codebook Generation Technique of Vector Quantization
    International Journal of Computer Applications, 2015
    Co-Authors: Sudeep D Thepade, Pritam H Patil
    Abstract:

    In the current era, most digital information is in the form of multimedia, with a giant share being videos. Videos have audio and visual content, where the visual content is a sequence of Frames, and most consecutive Frames have very little discriminative content. In the video summarization process, many Frames containing similar information would otherwise need to be processed, which leads to redundancy, slow processing, complexity and time consumption. Video summarization using Key Frames can speed up video processing. In this paper, a novel Key Frame extraction method is proposed using the Linde-Buzo-Gray (LBG) codebook generation technique of vector quantization with ten different codebook sizes. Experimentation with a test bed of videos has shown that larger LBG codebook sizes give better completeness of Key Frame extraction for video summarization. Experimental results are also discussed to demonstrate the validity of the proposed method for video content summarization.
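
A hedged sketch of Linde-Buzo-Gray codebook generation by repeated codevector splitting and Lloyd refinement; how the codebook is then used to decide Key Frames (for example, by codebook distortion between Frames) is left to the paper and not reproduced here.

```python
import numpy as np

def lbg_codebook(vectors, codebook_size, epsilon=0.01, iterations=10):
    """LBG: start from the global mean, repeatedly split codevectors, then Lloyd-refine.

    Assumes codebook_size is a power of two and vectors has shape (N, d).
    """
    codebook = np.array([vectors.mean(axis=0)])
    while len(codebook) < codebook_size:
        # Split every codevector into a slightly perturbed pair.
        codebook = np.concatenate([codebook * (1 + epsilon), codebook * (1 - epsilon)])
        # Lloyd refinement: assign vectors to the nearest codevector, recompute centroids.
        for _ in range(iterations):
            distances = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            labels = distances.argmin(axis=1)
            for k in range(len(codebook)):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```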

  • Novel method for KeyFrame extraction using block truncation coding and mean square error
    International Conference on Green Computing Communication and Electrical Engineering, 2014
    Co-Authors: Sudeep D Thepade, Nikhil Bankar, Akansha Raina, Shreyas Deshpande, Abhijeet Kulkarni
    Abstract:

    A video signal is made up of a number of Frames; the Frame rate (Frames per second, fps) gives the number of Frames in each second, so 32 fps means there are 32 Frames in one second. In video processing we have to process each Frame one by one, but processing every Frame would consume a lot of resources and time. A Key Frame is a Frame in which there is a major change compared to the previous Frame; hence, instead of processing all the Frames, we only process the Key Frames. In applications like video content summarization, data storage and surveillance, Key Frame extraction plays a vital role. This paper explains Key Frame extraction using MSE, Sobel edge detection with MSE, BTC, DTTBTC and TSTBTC; results of the methods are compared and discussed.
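
A minimal sketch of the simplest criterion mentioned above: the mean square error between consecutive Frames, with a Frame kept as a Key Frame when the MSE to its predecessor exceeds a threshold. The threshold value is an arbitrary assumption.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two frames of equal shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def key_frames_by_mse(frames, threshold=400.0):
    """Keep the first Frame, then every Frame whose MSE to the previous Frame is large."""
    keys = [0]
    for i in range(1, len(frames)):
        if mse(frames[i - 1], frames[i]) > threshold:
            keys.append(i)
    return keys
```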