Broadcast Video

The Experts below are selected from a list of 18237 Experts worldwide ranked by the ideXlab platform

Qingming Huang - One of the best experts on this subject based on the ideXlab platform.

  • event tactic analysis based on Broadcast sports Video
    IEEE Transactions on Multimedia, 2009
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to sports Video analysis have concentrated on semantic event detection. Sports professionals, however, are more interested in tactic analysis that can help improve their performance. In this paper, we propose a novel approach to extract tactic information from attack events in Broadcast soccer Video and present the events in a tactic mode to coaches and sports professionals. We extract the attack events with far-view shots using the analysis and alignment of web-casting text and Broadcast Video. For a detected event, two tactic representations, the aggregate trajectory and the play region sequence, are constructed from the multi-object trajectories and field locations in the event shots. Based on the multi-object trajectories tracked in the shot, a weighted graph is constructed via analysis of the temporal-spatial interaction among the players and the ball. Using the Viterbi algorithm, the aggregate trajectory is computed from this weighted graph. The play region sequence is obtained by identifying the active field locations in the event using line detection and a competition network. The interactive relationship of the aggregate trajectory with the play region information, together with hypothesis testing on the trajectory's temporal-spatial distribution, is employed to discover tactic patterns in a hierarchical coarse-to-fine framework. Extensive experiments on FIFA World Cup 2006 data show that the proposed approach is highly effective.
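    The Viterbi step described above, linking per-frame candidate object positions through a weighted graph into a single aggregate trajectory, can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' implementation: `node_scores` and `transition` are hypothetical stand-ins for the paper's graph weights.

```python
import numpy as np

def viterbi_aggregate(node_scores, transition):
    """Select one candidate node per time step maximizing total path score.

    node_scores: list of 1-D arrays; node_scores[t][i] = weight of candidate i at step t.
    transition:  function (t, i, j) -> weight for moving from candidate i at t to j at t+1.
    Returns the chosen candidate index at each step (the "aggregate trajectory").
    """
    T = len(node_scores)
    best = node_scores[0].astype(float)   # best score ending at each node of step 0
    back = []                             # back-pointers for path recovery
    for t in range(1, T):
        n_prev, n_cur = len(node_scores[t - 1]), len(node_scores[t])
        trans = np.array([[transition(t - 1, i, j) for j in range(n_cur)]
                          for i in range(n_prev)])
        scores = best[:, None] + trans            # shape (n_prev, n_cur)
        back.append(scores.argmax(axis=0))        # best predecessor per current node
        best = scores.max(axis=0) + node_scores[t]
    # Backtrack from the best final node.
    path = [int(best.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]
```

    In the paper the node weights come from tracked player/ball trajectories and the transitions from their temporal-spatial interaction; here both are left abstract.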

  • Using webcast text for semantic event detection in Broadcast sports Video
    IEEE Transactions on Multimedia, 2008
    Co-Authors: Changsheng Xu, Yong Rui, Hanqing Lu, Yi-fan Zhang, Guangyu Zhu, Qingming Huang
    Abstract:

    Sports Video semantic event detection is essential for sports Video summarization and retrieval, and extensive research efforts have been devoted to this area in recent years. However, existing sports Video event detection approaches rely heavily either on the Video content itself, which faces the difficulty of extracting high-level semantic information from Video content using computer vision and image processing techniques, or on manually generated Video ontologies, which are domain specific and difficult to align automatically with the Video content. In this paper, we present a novel approach to sports Video semantic event detection based on the analysis and alignment of Webcast text and Broadcast Video. Webcast text is a text Broadcast channel for sports games that is co-produced with the Broadcast Video and is easily obtained from the Web. We first analyze the Webcast text to cluster and detect text events in an unsupervised way using probabilistic latent semantic analysis (pLSA). Based on the detected text events and Video structure analysis, we employ a conditional random field model (CRFM) to align text events with Video events by detecting the event moment and event boundary in the Video. Incorporating Webcast text into sports Video analysis significantly facilitates sports Video semantic event detection. We conducted experiments on 33 hours of soccer and basketball games covering Webcast analysis, Broadcast Video analysis, and text/Video semantic alignment. The results are encouraging when compared with the manually labeled ground truth.
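    The unsupervised text-event clustering above rests on pLSA. A compact EM sketch, assuming a plain document-term count matrix rather than the authors' actual webcast-text features, looks like this:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Tiny pLSA via EM on a document-term count matrix (docs x terms).

    Returns P(z|d) of shape (D, Z) and P(w|z) of shape (Z, W).
    """
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_z_d = rng.random((D, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z | d, w), shape (D, W, Z)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(2, keepdims=True) + 1e-12
        weighted = counts[:, :, None] * joint
        # M-step: re-estimate topic mixtures and topic-word distributions
        p_z_d = weighted.sum(1); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
        p_w_z = weighted.sum(0).T; p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

    The subsequent CRF alignment step is a separate sequence-labeling model and is not sketched here.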

  • ICME - Lower attentive region detection for virtual content insertion in Broadcast Video
    2008 IEEE International Conference on Multimedia and Expo, 2008
    Co-Authors: Huiying Liu, Shuqiang Jiang, Qingming Huang
    Abstract:

    Virtual Content Insertion (VCI) is an emerging application of Video analysis. For VCI, the spatial position is very important, as improper placement will make the insertion intrusive. To choose the spatial position, we propose the notion of the Lower Attentive Region (LAR) and provide a generic framework for LAR detection in Broadcast Video. An LAR is defined, from a cognitive point of view, as a region of the Video frame that attracts less of the audience's attention. It can be changed with little interruption to the main content of the original Video. The proposed LAR detection framework includes both bottom-up and top-down modules and can be adapted to all types of Videos. Finally, we apply the proposed LAR detection approach to Broadcast sports Video by integrating domain knowledge. Experiments on LAR detection and VCI in Broadcast Video demonstrate the effectiveness of the proposed method.
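    As a rough illustration of the bottom-up side of such a framework, one could score grid cells of a frame with a simple saliency proxy and pick the least salient one. The variance-based saliency below is a crude hypothetical stand-in for the paper's detector, not its actual attention model:

```python
import numpy as np

def lowest_attentive_region(frame, grid=(4, 4)):
    """Pick the grid cell with the lowest simple saliency (local intensity variance).

    Low-variance, low-contrast cells are assumed to draw less viewer attention,
    making them candidates for virtual content insertion.
    frame: 2-D grayscale array. Returns (row, col) of the chosen cell.
    """
    H, W = frame.shape
    gh, gw = grid
    h, w = H // gh, W // gw
    sal = np.empty(grid)
    for r in range(gh):
        for c in range(gw):
            cell = frame[r * h:(r + 1) * h, c * w:(c + 1) * w]
            sal[r, c] = cell.var()   # contrast proxy for attention
    return tuple(np.unravel_index(sal.argmin(), sal.shape))
```

    The paper's top-down module (domain knowledge for sports Video) would further veto cells that overlap important content such as the scoreboard or the ball.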

  • CIVR - Event tactic analysis based on player and ball trajectory in Broadcast Video
    Proceedings of the 2008 international conference on Content-based image and video retrieval - CIVR '08, 2008
    Co-Authors: Guangyu Zhu, Yi Zhang, Qingming Huang
    Abstract:

    Most existing approaches to sports Video analysis concentrate on semantic event detection, which is oriented toward the general audience; the extracted events are presented without further analysis from a tactic perspective. In this paper, we propose a novel approach to extract tactic information and recognize tactic patterns from goal events in Broadcast soccer Video. We extract the goal events with far-view shots using the analysis and alignment of web-casting text and Broadcast Video. For a detected goal event, a multi-object detection and tracking algorithm is employed to obtain the player and ball trajectories. An effective tactic representation, called the aggregate trajectory, is constructed from the multi-object trajectories using a novel analysis of the temporal-spatial interaction among the players and the ball. The interactive relationship of the aggregate trajectory and hypothesis testing on the trajectory's temporal-spatial distribution are employed to discover tactic patterns in a hierarchical coarse-to-fine framework. Experimental results on FIFA World Cup 2006 data show that the proposed approach is more effective than our previous work.

  • trajectory based event tactics analysis in Broadcast sports Video
    ACM Multimedia, 2007
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to event detection in sports Video are oriented toward the general audience: the extracted events are presented to viewers without further analysis. Professionals such as soccer coaches, however, are more interested in the tactics used in those events. In this paper, we present a novel approach to extract tactic information from goal events in Broadcast soccer Video and to present each goal event in a tactic mode to coaches and sports professionals. We first extract goal events with far-view shots based on the analysis and alignment of web-casting text and Broadcast Video. For a detected goal event, we employ a multi-object detection and tracking algorithm to obtain the player and ball trajectories in the shot. Compared with existing work, we propose an effective tactic representation, called the aggregate trajectory, which is constructed from multiple trajectories using a novel analysis of the temporal-spatial interaction among the players and the ball. The interactive relationship with play region information and hypothesis testing on the trajectory's temporal-spatial distribution are exploited to analyze tactic patterns in a hierarchical coarse-to-fine framework. Experimental results on FIFA World Cup 2006 data are promising and demonstrate that our approach is effective.

Ling-yu Duan - One of the best experts on this subject based on the ideXlab platform.

  • a multimodal scheme for program segmentation and representation in Broadcast Video streams
    IEEE Transactions on Multimedia, 2008
    Co-Authors: Jinqiao Wang, Ling-yu Duan, Qingshan Liu, Jesse S. Jin
    Abstract:

    With the advance of digital Video recording and playback systems, the need to efficiently manage recorded TV programs is evident, so that users can readily locate and browse their favorite programs. In this paper, we propose a multimodal scheme to segment and represent TV Video streams. The scheme aims to recover the temporal and structural characteristics of TV programs using visual, auditory, and textual information. In terms of visual cues, we develop a novel concept named program-oriented informative images (POIM) to identify candidate points correlated with the boundaries of individual programs. For audio cues, a multiscale Kullback-Leibler (K-L) distance is proposed to locate audio scene changes (ASC), and the ASC is then aligned with Video scene changes to represent candidate program boundaries. In addition, latent semantic analysis (LSA) is adopted to calculate the textual content similarity (TCS) between shots, modeling inter-program similarity and intra-program dissimilarity in terms of speech content. Finally, we fuse the multimodal features of POIM, ASC, and TCS to detect the boundaries of programs, including individual commercials (spots). Toward effective program guides and attractive content browsing, we propose a multimodal representation of individual programs that uses POIM images, key frames, and textual keywords in a summarization manner. Extensive experiments were carried out on an open benchmarking dataset, the TRECVID 2005 corpus, and promising results were achieved. Compared with the electronic program guide (EPG), our solution provides a more generic approach to determining the exact boundaries of diverse TV programs, even including dramatic spots.
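    The audio cue above can be illustrated with a small sketch: fit a Gaussian to the feature samples on each side of a candidate boundary, take a symmetric K-L distance, and average it over several window sizes. This 1-D version is a deliberate simplification; the real system would operate on audio feature vectors (e.g., MFCCs), and the window sizes are assumed values:

```python
import numpy as np

def gauss_kl(x, y):
    """Symmetric KL divergence between 1-D Gaussians fitted to samples x and y."""
    m1, v1 = x.mean(), x.var() + 1e-8
    m2, v2 = y.mean(), y.var() + 1e-8
    kl12 = 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
    kl21 = 0.5 * (np.log(v1 / v2) + (v2 + (m2 - m1) ** 2) / v1 - 1.0)
    return kl12 + kl21

def multiscale_kl(feature, t, scales=(16, 32, 64)):
    """Average the symmetric KL over several window sizes around candidate boundary t.

    A large value suggests an audio scene change at t; a small value suggests
    both sides come from the same audio scene.
    """
    return float(np.mean([gauss_kl(feature[max(0, t - s):t], feature[t:t + s])
                          for s in scales]))
```

    Averaging over scales makes the detector less sensitive to the choice of a single analysis window, which is the point of the multiscale formulation.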

  • live sports event detection based on Broadcast Video and web casting text
    ACM Multimedia, 2006
    Co-Authors: Jinjun Wang, Kongwah Wan, Ling-yu Duan
    Abstract:

    Event detection is essential for sports Video summarization, indexing, and retrieval, and extensive research efforts have been devoted to this area. However, previous approaches rely heavily on the Video content itself and require the whole Video for event detection. Due to the semantic gap between low-level features and high-level events, it is difficult to come up with a generic framework that achieves high event detection accuracy. In addition, the dynamic structures of different sports domains further complicate the analysis and impede the implementation of live event detection systems. In this paper, we present a novel approach to event detection in live sports games using web-casting text and Broadcast Video. Web-casting text is a text Broadcast source for sports games that can be captured live from the web. Incorporating web-casting text into sports Video analysis significantly improves event detection accuracy. Compared with previous approaches, the proposed approach is able to: (1) detect live events based only on the partial content captured from the web and TV; (2) extract detailed event semantics and detect exact event boundaries, which are very difficult or impossible for previous approaches to handle; and (3) create personalized summaries related to a certain event, player, or team according to the user's preference. We present the framework of our approach and the details of text analysis, Video analysis, and text/Video alignment. We conducted experiments on both live and recorded games. The results are encouraging and comparable to manually detected events. We also give scenarios illustrating how to apply the proposed solution to professional and consumer services.
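    As a toy illustration of the text-analysis side, time-stamped web-casting lines can be scanned for event keywords. The line format and keyword table below are hypothetical (real feeds vary by provider), and this is far simpler than the paper's actual text analysis:

```python
import re

# Hypothetical keyword-to-event-type table; a real system would be richer.
EVENT_KEYWORDS = {"goal": "goal", "yellow card": "card", "free kick": "free_kick"}

def detect_text_events(lines):
    """Scan time-stamped web-casting text lines for sports events.

    Expects lines like "23' Player A scores a goal" (minute, then commentary).
    Returns (minute, event_type, raw_line) tuples for recognized events.
    """
    events = []
    for line in lines:
        m = re.match(r"\s*(\d+)'", line)   # leading "NN'" match-minute stamp
        if not m:
            continue
        low = line.lower()
        for kw, etype in EVENT_KEYWORDS.items():
            if kw in low:
                events.append((int(m.group(1)), etype, line.strip()))
                break
    return events
```

    In the full system these text events would then be aligned with the Broadcast Video stream to find the exact event boundaries.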

  • PCM - A semantic image category for structuring TV Broadcast Video streams
    Advances in Multimedia Information Processing - PCM 2006, 2006
    Co-Authors: Jinqiao Wang, Ling-yu Duan, Jesse S. Jin
    Abstract:

    A TV Broadcast Video stream consists of various kinds of programs, such as sitcoms, news, sports, commercials, and weather. In this paper, we propose a semantic image category, named Program Oriented Informative Images (POIM), to facilitate the segmentation, indexing, and retrieval of different programs. The assumption is that most stations tend to insert lead-in/lead-out Video shots to explicitly introduce the current program and indicate transitions between consecutive programs within TV streams. Such shots often overlay text, graphics, and storytelling images to create an image sequence of POIM as a visual representation of the current program. With the advance of post-editing effects, POIM is becoming an effective indicator for structuring TV streams and is also a fairly common “prop” in program content production. We have developed a POIM recognizer involving a set of global/local visual features and supervised/unsupervised learning, and have carried out comparison experiments. A promising result, F1 = 90.2%, has been achieved on part of the TRECVID 2005 Video corpus. The recognition of POIM, together with other audiovisual features, can be used to further determine program boundaries.
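    A much-simplified stand-in for such a recognizer, using only a global color histogram and nearest-centroid classification, might look like the sketch below. The paper combines global/local features with supervised/unsupervised learning; the feature, classifier, and class names here are all illustrative assumptions:

```python
import numpy as np

def color_hist(img, bins=8):
    """Global per-channel color histogram as a crude POIM feature vector.

    img: H x W x C uint8 image. Returns a concatenated, normalized histogram.
    """
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256), density=True)[0]
         for c in range(img.shape[-1])]
    return np.concatenate(h)

def nearest_centroid_classify(feat, centroids):
    """Assign a frame to the nearest class centroid (e.g. POIM vs. non-POIM)."""
    dists = {label: np.linalg.norm(feat - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)
```

    In practice the centroids would be learned from labeled lead-in/lead-out shots, and local features (text and graphic overlays) would carry much of the discrimination.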

  • ACM Multimedia - Live sports event detection based on Broadcast Video and web-casting text
    Proceedings of the 14th annual ACM international conference on Multimedia - MULTIMEDIA '06, 2006
    Co-Authors: Jinjun Wang, Kongwah Wan, Ling-yu Duan
    Abstract:

    Event detection is essential for sports Video summarization, indexing, and retrieval, and extensive research efforts have been devoted to this area. However, previous approaches rely heavily on the Video content itself and require the whole Video for event detection. Due to the semantic gap between low-level features and high-level events, it is difficult to come up with a generic framework that achieves high event detection accuracy. In addition, the dynamic structures of different sports domains further complicate the analysis and impede the implementation of live event detection systems. In this paper, we present a novel approach to event detection in live sports games using web-casting text and Broadcast Video. Web-casting text is a text Broadcast source for sports games that can be captured live from the web. Incorporating web-casting text into sports Video analysis significantly improves event detection accuracy. Compared with previous approaches, the proposed approach is able to: (1) detect live events based only on the partial content captured from the web and TV; (2) extract detailed event semantics and detect exact event boundaries, which are very difficult or impossible for previous approaches to handle; and (3) create personalized summaries related to a certain event, player, or team according to the user's preference. We present the framework of our approach and the details of text analysis, Video analysis, and text/Video alignment. We conducted experiments on both live and recorded games. The results are encouraging and comparable to manually detected events. We also give scenarios illustrating how to apply the proposed solution to professional and consumer services.

Guangyu Zhu - One of the best experts on this subject based on the ideXlab platform.

  • event tactic analysis based on Broadcast sports Video
    IEEE Transactions on Multimedia, 2009
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to sports Video analysis have concentrated on semantic event detection. Sports professionals, however, are more interested in tactic analysis that can help improve their performance. In this paper, we propose a novel approach to extract tactic information from attack events in Broadcast soccer Video and present the events in a tactic mode to coaches and sports professionals. We extract the attack events with far-view shots using the analysis and alignment of web-casting text and Broadcast Video. For a detected event, two tactic representations, the aggregate trajectory and the play region sequence, are constructed from the multi-object trajectories and field locations in the event shots. Based on the multi-object trajectories tracked in the shot, a weighted graph is constructed via analysis of the temporal-spatial interaction among the players and the ball. Using the Viterbi algorithm, the aggregate trajectory is computed from this weighted graph. The play region sequence is obtained by identifying the active field locations in the event using line detection and a competition network. The interactive relationship of the aggregate trajectory with the play region information, together with hypothesis testing on the trajectory's temporal-spatial distribution, is employed to discover tactic patterns in a hierarchical coarse-to-fine framework. Extensive experiments on FIFA World Cup 2006 data show that the proposed approach is highly effective.

  • Using webcast text for semantic event detection in Broadcast sports Video
    IEEE Transactions on Multimedia, 2008
    Co-Authors: Changsheng Xu, Yong Rui, Hanqing Lu, Yi-fan Zhang, Guangyu Zhu, Qingming Huang
    Abstract:

    Sports Video semantic event detection is essential for sports Video summarization and retrieval, and extensive research efforts have been devoted to this area in recent years. However, existing sports Video event detection approaches rely heavily either on the Video content itself, which faces the difficulty of extracting high-level semantic information from Video content using computer vision and image processing techniques, or on manually generated Video ontologies, which are domain specific and difficult to align automatically with the Video content. In this paper, we present a novel approach to sports Video semantic event detection based on the analysis and alignment of Webcast text and Broadcast Video. Webcast text is a text Broadcast channel for sports games that is co-produced with the Broadcast Video and is easily obtained from the Web. We first analyze the Webcast text to cluster and detect text events in an unsupervised way using probabilistic latent semantic analysis (pLSA). Based on the detected text events and Video structure analysis, we employ a conditional random field model (CRFM) to align text events with Video events by detecting the event moment and event boundary in the Video. Incorporating Webcast text into sports Video analysis significantly facilitates sports Video semantic event detection. We conducted experiments on 33 hours of soccer and basketball games covering Webcast analysis, Broadcast Video analysis, and text/Video semantic alignment. The results are encouraging when compared with the manually labeled ground truth.

  • CIVR - Event tactic analysis based on player and ball trajectory in Broadcast Video
    Proceedings of the 2008 international conference on Content-based image and video retrieval - CIVR '08, 2008
    Co-Authors: Guangyu Zhu, Yi Zhang, Qingming Huang
    Abstract:

    Most existing approaches to sports Video analysis concentrate on semantic event detection, which is oriented toward the general audience; the extracted events are presented without further analysis from a tactic perspective. In this paper, we propose a novel approach to extract tactic information and recognize tactic patterns from goal events in Broadcast soccer Video. We extract the goal events with far-view shots using the analysis and alignment of web-casting text and Broadcast Video. For a detected goal event, a multi-object detection and tracking algorithm is employed to obtain the player and ball trajectories. An effective tactic representation, called the aggregate trajectory, is constructed from the multi-object trajectories using a novel analysis of the temporal-spatial interaction among the players and the ball. The interactive relationship of the aggregate trajectory and hypothesis testing on the trajectory's temporal-spatial distribution are employed to discover tactic patterns in a hierarchical coarse-to-fine framework. Experimental results on FIFA World Cup 2006 data show that the proposed approach is more effective than our previous work.

  • trajectory based event tactics analysis in Broadcast sports Video
    ACM Multimedia, 2007
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to event detection in sports Video are oriented toward the general audience: the extracted events are presented to viewers without further analysis. Professionals such as soccer coaches, however, are more interested in the tactics used in those events. In this paper, we present a novel approach to extract tactic information from goal events in Broadcast soccer Video and to present each goal event in a tactic mode to coaches and sports professionals. We first extract goal events with far-view shots based on the analysis and alignment of web-casting text and Broadcast Video. For a detected goal event, we employ a multi-object detection and tracking algorithm to obtain the player and ball trajectories in the shot. Compared with existing work, we propose an effective tactic representation, called the aggregate trajectory, which is constructed from multiple trajectories using a novel analysis of the temporal-spatial interaction among the players and the ball. The interactive relationship with play region information and hypothesis testing on the trajectory's temporal-spatial distribution are exploited to analyze tactic patterns in a hierarchical coarse-to-fine framework. Experimental results on FIFA World Cup 2006 data are promising and demonstrate that our approach is effective.

Hongxun Yao - One of the best experts on this subject based on the ideXlab platform.

  • event tactic analysis based on Broadcast sports Video
    IEEE Transactions on Multimedia, 2009
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to sports Video analysis have concentrated on semantic event detection. Sports professionals, however, are more interested in tactic analysis that can help improve their performance. In this paper, we propose a novel approach to extract tactic information from attack events in Broadcast soccer Video and present the events in a tactic mode to coaches and sports professionals. We extract the attack events with far-view shots using the analysis and alignment of web-casting text and Broadcast Video. For a detected event, two tactic representations, the aggregate trajectory and the play region sequence, are constructed from the multi-object trajectories and field locations in the event shots. Based on the multi-object trajectories tracked in the shot, a weighted graph is constructed via analysis of the temporal-spatial interaction among the players and the ball. Using the Viterbi algorithm, the aggregate trajectory is computed from this weighted graph. The play region sequence is obtained by identifying the active field locations in the event using line detection and a competition network. The interactive relationship of the aggregate trajectory with the play region information, together with hypothesis testing on the trajectory's temporal-spatial distribution, is employed to discover tactic patterns in a hierarchical coarse-to-fine framework. Extensive experiments on FIFA World Cup 2006 data show that the proposed approach is highly effective.

  • trajectory based event tactics analysis in Broadcast sports Video
    ACM Multimedia, 2007
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to event detection in sports Video are oriented toward the general audience: the extracted events are presented to viewers without further analysis. Professionals such as soccer coaches, however, are more interested in the tactics used in those events. In this paper, we present a novel approach to extract tactic information from goal events in Broadcast soccer Video and to present each goal event in a tactic mode to coaches and sports professionals. We first extract goal events with far-view shots based on the analysis and alignment of web-casting text and Broadcast Video. For a detected goal event, we employ a multi-object detection and tracking algorithm to obtain the player and ball trajectories in the shot. Compared with existing work, we propose an effective tactic representation, called the aggregate trajectory, which is constructed from multiple trajectories using a novel analysis of the temporal-spatial interaction among the players and the ball. The interactive relationship with play region information and hypothesis testing on the trajectory's temporal-spatial distribution are exploited to analyze tactic patterns in a hierarchical coarse-to-fine framework. Experimental results on FIFA World Cup 2006 data are promising and demonstrate that our approach is effective.

Yong Rui - One of the best experts on this subject based on the ideXlab platform.

  • event tactic analysis based on Broadcast sports Video
    IEEE Transactions on Multimedia, 2009
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to sports Video analysis have concentrated on semantic event detection. Sports professionals, however, are more interested in tactic analysis that can help improve their performance. In this paper, we propose a novel approach to extract tactic information from attack events in Broadcast soccer Video and present the events in a tactic mode to coaches and sports professionals. We extract the attack events with far-view shots using the analysis and alignment of web-casting text and Broadcast Video. For a detected event, two tactic representations, the aggregate trajectory and the play region sequence, are constructed from the multi-object trajectories and field locations in the event shots. Based on the multi-object trajectories tracked in the shot, a weighted graph is constructed via analysis of the temporal-spatial interaction among the players and the ball. Using the Viterbi algorithm, the aggregate trajectory is computed from this weighted graph. The play region sequence is obtained by identifying the active field locations in the event using line detection and a competition network. The interactive relationship of the aggregate trajectory with the play region information, together with hypothesis testing on the trajectory's temporal-spatial distribution, is employed to discover tactic patterns in a hierarchical coarse-to-fine framework. Extensive experiments on FIFA World Cup 2006 data show that the proposed approach is highly effective.

  • Using webcast text for semantic event detection in Broadcast sports Video
    IEEE Transactions on Multimedia, 2008
    Co-Authors: Changsheng Xu, Yong Rui, Hanqing Lu, Yi-fan Zhang, Guangyu Zhu, Qingming Huang
    Abstract:

    Sports Video semantic event detection is essential for sports Video summarization and retrieval, and extensive research efforts have been devoted to this area in recent years. However, existing sports Video event detection approaches rely heavily either on the Video content itself, which faces the difficulty of extracting high-level semantic information from Video content using computer vision and image processing techniques, or on manually generated Video ontologies, which are domain specific and difficult to align automatically with the Video content. In this paper, we present a novel approach to sports Video semantic event detection based on the analysis and alignment of Webcast text and Broadcast Video. Webcast text is a text Broadcast channel for sports games that is co-produced with the Broadcast Video and is easily obtained from the Web. We first analyze the Webcast text to cluster and detect text events in an unsupervised way using probabilistic latent semantic analysis (pLSA). Based on the detected text events and Video structure analysis, we employ a conditional random field model (CRFM) to align text events with Video events by detecting the event moment and event boundary in the Video. Incorporating Webcast text into sports Video analysis significantly facilitates sports Video semantic event detection. We conducted experiments on 33 hours of soccer and basketball games covering Webcast analysis, Broadcast Video analysis, and text/Video semantic alignment. The results are encouraging when compared with the manually labeled ground truth.

  • trajectory based event tactics analysis in Broadcast sports Video
    ACM Multimedia, 2007
    Co-Authors: Guangyu Zhu, Yong Rui, Qingming Huang, Shuqiang Jiang, Wen Gao, Hongxun Yao
    Abstract:

    Most existing approaches to event detection in sports Video are oriented toward the general audience: the extracted events are presented to viewers without further analysis. Professionals such as soccer coaches, however, are more interested in the tactics used in those events. In this paper, we present a novel approach to extract tactic information from goal events in Broadcast soccer Video and to present each goal event in a tactic mode to coaches and sports professionals. We first extract goal events with far-view shots based on the analysis and alignment of web-casting text and Broadcast Video. For a detected goal event, we employ a multi-object detection and tracking algorithm to obtain the player and ball trajectories in the shot. Compared with existing work, we propose an effective tactic representation, called the aggregate trajectory, which is constructed from multiple trajectories using a novel analysis of the temporal-spatial interaction among the players and the ball. The interactive relationship with play region information and hypothesis testing on the trajectory's temporal-spatial distribution are exploited to analyze tactic patterns in a hierarchical coarse-to-fine framework. Experimental results on FIFA World Cup 2006 data are promising and demonstrate that our approach is effective.