Bipartite Graph

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 42,546 Experts worldwide, ranked by the ideXlab platform

Tatsuki Taniguchi - One of the best experts on this subject based on the ideXlab platform.

  • 3D-SE Viewer: A Text Mining Tool Based on Bipartite Graph Visualization
    International Joint Conference on Neural Networks, 2007
    Co-Authors: Shiro Usui, A Naud, Naonori Ueda, Tatsuki Taniguchi
    Abstract:

    A new interactive visualization tool is proposed for textual data mining, based on Bipartite Graph visualization. Applications to three text datasets are presented to show the capability of this interactive tool to visualize complex relational information between two sets of items by embedding their Graph in a 3-dimensional space. Information extracted from texts, such as keywords, indexing terms, or topics, is visualized to allow interactive browsing of a field of research characterized by keywords, topics, or research teams. This 3-D visualization tool conveys more information than planar or linear displays of Graphs.
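
    The document-keyword relation underlying such a display can be sketched as a Bipartite Graph in a few lines. The toy corpus and keyword sets below are hypothetical, not from the paper, and the 3-D embedding step itself is omitted; this only illustrates the two-sided structure the viewer visualizes.

```python
def build_bipartite(docs):
    """Build bipartite edges linking each document node to its keyword nodes."""
    return [(d, k) for d, kws in sorted(docs.items()) for k in sorted(kws)]

def keyword_degrees(edges):
    """Count, for each keyword node, how many documents it connects to."""
    degrees = {}
    for _, k in edges:
        degrees[k] = degrees.get(k, 0) + 1
    return degrees

# Hypothetical toy corpus: documents on one side, extracted keywords on the other.
docs = {
    "d1": {"graph", "visualization"},
    "d2": {"graph", "text mining"},
    "d3": {"text mining", "keywords"},
}
edges = build_bipartite(docs)
print(keyword_degrees(edges))  # {'graph': 2, 'visualization': 1, 'text mining': 2, 'keywords': 1}
```

    A layout engine would then place document nodes and keyword nodes in two point sets and embed the edge structure in 3-D space for browsing.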

  • IJCNN - 3D-SE Viewer: A Text Mining Tool based on Bipartite Graph Visualization
    2007 International Joint Conference on Neural Networks, 2007
    Co-Authors: Shiro Usui, A Naud, Naonori Ueda, Tatsuki Taniguchi
    Abstract:

    A new interactive visualization tool is proposed for textual data mining based on Bipartite Graph visualization. Applications to three text datasets are presented to show the capability of this interactive tool to visualize complex relational information between two sets of items by embedding their Graph in a 3-dimensional space. Information extracted from texts, such as keywords, indexing terms or topics are visualized to allow interactive browsing of a field of research featured by keywords, topics or research teams. This 3-D visualization tool conveys more information than planar or linear displays of Graphs.

Wenjun Zeng - One of the best experts on this subject based on the ideXlab platform.

  • Photo Stylistic Brush: Robust Style Transfer via Superpixel-Based Bipartite Graph
    IEEE Transactions on Multimedia, 2018
    Co-Authors: Jiaying Liu, Wenhan Yang, Xiaoyan Sun, Wenjun Zeng
    Abstract:

    With the rapid development of social networks and multimedia technology, customized image and video stylization has been widely used in various social-media applications. In this paper, we explore the problem of exemplar-based photo style transfer, which provides a flexible and convenient way to invoke a fantastic visual impression. Rather than investigating fixed artistic patterns to represent certain styles, as was done in some previous works, our work emphasizes styles related to a series of visual effects in the photograph (e.g., color, tone, and contrast). We propose a photo stylistic brush, an automatic robust style transfer approach based on a Superpixel-based Bipartite Graph (SuperBIG). A two-step Bipartite Graph algorithm with different granularity levels is employed to aggregate pixels into superpixels and find their correspondences. In the first step, with the extracted hierarchical features, a Bipartite Graph is constructed to describe the content similarity for pixel partition to produce superpixels. In the second step, superpixels in the input/reference image are rematched to form a new superpixel-based Bipartite Graph, and superpixel-level correspondences are generated by Bipartite matching. Finally, the refined correspondence guides SuperBIG to perform the transformation in a decorrelated color space. Extensive experimental results demonstrate the effectiveness and robustness of the proposed method for transferring various styles of exemplar images, even in some challenging cases, such as night images.
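
    The second-step correspondence search can be sketched as a minimum-cost Bipartite matching between superpixel feature vectors. The feature representation (mean-color vectors) and the Hungarian solver used here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_superpixels(feats_in, feats_ref):
    """Match input-image superpixels to reference-image superpixels by
    minimum-cost bipartite matching on pairwise feature distances."""
    # Cost matrix: Euclidean distance between every input/reference feature pair.
    cost = np.linalg.norm(feats_in[:, None, :] - feats_ref[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Two toy superpixels per image, described by 2-D mean-color features.
a = np.array([[0.1, 0.1], [0.9, 0.9]])
b = np.array([[0.9, 0.8], [0.0, 0.2]])
print(match_superpixels(a, b))  # [(0, 1), (1, 0)]
```

    The resulting pairs would then drive the per-superpixel color transformation in the decorrelated color space.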

  • Photo Stylistic Brush: Robust Style Transfer via Superpixel-Based Bipartite Graph
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Jiaying Liu, Wenhan Yang, Xiaoyan Sun, Wenjun Zeng
    Abstract:

    With the rapid development of social networks and multimedia technology, customized image and video stylization has been widely used in various social-media applications. In this paper, we explore the problem of exemplar-based photo style transfer, which provides a flexible and convenient way to invoke a fantastic visual impression. Rather than investigating fixed artistic patterns to represent certain styles, as was done in some previous works, our work emphasizes styles related to a series of visual effects in the photograph (e.g., color, tone, and contrast). We propose a photo stylistic brush, an automatic robust style transfer approach based on a Superpixel-based Bipartite Graph (SuperBIG). A two-step Bipartite Graph algorithm with different granularity levels is employed to aggregate pixels into superpixels and find their correspondences. In the first step, with the extracted hierarchical features, a Bipartite Graph is constructed to describe the content similarity for pixel partition to produce superpixels. In the second step, superpixels in the input/reference image are rematched to form a new superpixel-based Bipartite Graph, and superpixel-level correspondences are generated by Bipartite matching. Finally, the refined correspondence guides SuperBIG to perform the transformation in a decorrelated color space. Extensive experimental results demonstrate the effectiveness and robustness of the proposed method for transferring various styles of exemplar images, even in some challenging cases, such as night images.

Shiro Usui - One of the best experts on this subject based on the ideXlab platform.

  • 3D-SE Viewer: A Text Mining Tool Based on Bipartite Graph Visualization
    International Joint Conference on Neural Networks, 2007
    Co-Authors: Shiro Usui, A Naud, Naonori Ueda, Tatsuki Taniguchi
    Abstract:

    A new interactive visualization tool is proposed for textual data mining, based on Bipartite Graph visualization. Applications to three text datasets are presented to show the capability of this interactive tool to visualize complex relational information between two sets of items by embedding their Graph in a 3-dimensional space. Information extracted from texts, such as keywords, indexing terms, or topics, is visualized to allow interactive browsing of a field of research characterized by keywords, topics, or research teams. This 3-D visualization tool conveys more information than planar or linear displays of Graphs.

  • IJCNN - 3D-SE Viewer: A Text Mining Tool based on Bipartite Graph Visualization
    2007 International Joint Conference on Neural Networks, 2007
    Co-Authors: Shiro Usui, A Naud, Naonori Ueda, Tatsuki Taniguchi
    Abstract:

    A new interactive visualization tool is proposed for textual data mining, based on Bipartite Graph visualization. Applications to three text datasets are presented to show the capability of this interactive tool to visualize complex relational information between two sets of items by embedding their Graph in a 3-dimensional space. Information extracted from texts, such as keywords, indexing terms, or topics, is visualized to allow interactive browsing of a field of research characterized by keywords, topics, or research teams. This 3-D visualization tool conveys more information than planar or linear displays of Graphs.

Jiaying Liu - One of the best experts on this subject based on the ideXlab platform.

  • Photo Stylistic Brush: Robust Style Transfer via Superpixel-Based Bipartite Graph
    IEEE Transactions on Multimedia, 2018
    Co-Authors: Jiaying Liu, Wenhan Yang, Xiaoyan Sun, Wenjun Zeng
    Abstract:

    With the rapid development of social networks and multimedia technology, customized image and video stylization has been widely used in various social-media applications. In this paper, we explore the problem of exemplar-based photo style transfer, which provides a flexible and convenient way to invoke a fantastic visual impression. Rather than investigating fixed artistic patterns to represent certain styles, as was done in some previous works, our work emphasizes styles related to a series of visual effects in the photograph (e.g., color, tone, and contrast). We propose a photo stylistic brush, an automatic robust style transfer approach based on a Superpixel-based Bipartite Graph (SuperBIG). A two-step Bipartite Graph algorithm with different granularity levels is employed to aggregate pixels into superpixels and find their correspondences. In the first step, with the extracted hierarchical features, a Bipartite Graph is constructed to describe the content similarity for pixel partition to produce superpixels. In the second step, superpixels in the input/reference image are rematched to form a new superpixel-based Bipartite Graph, and superpixel-level correspondences are generated by Bipartite matching. Finally, the refined correspondence guides SuperBIG to perform the transformation in a decorrelated color space. Extensive experimental results demonstrate the effectiveness and robustness of the proposed method for transferring various styles of exemplar images, even in some challenging cases, such as night images.

  • Photo Stylistic Brush: Robust Style Transfer via Superpixel-Based Bipartite Graph
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Jiaying Liu, Wenhan Yang, Xiaoyan Sun, Wenjun Zeng
    Abstract:

    With the rapid development of social networks and multimedia technology, customized image and video stylization has been widely used in various social-media applications. In this paper, we explore the problem of exemplar-based photo style transfer, which provides a flexible and convenient way to invoke a fantastic visual impression. Rather than investigating fixed artistic patterns to represent certain styles, as was done in some previous works, our work emphasizes styles related to a series of visual effects in the photograph (e.g., color, tone, and contrast). We propose a photo stylistic brush, an automatic robust style transfer approach based on a Superpixel-based Bipartite Graph (SuperBIG). A two-step Bipartite Graph algorithm with different granularity levels is employed to aggregate pixels into superpixels and find their correspondences. In the first step, with the extracted hierarchical features, a Bipartite Graph is constructed to describe the content similarity for pixel partition to produce superpixels. In the second step, superpixels in the input/reference image are rematched to form a new superpixel-based Bipartite Graph, and superpixel-level correspondences are generated by Bipartite matching. Finally, the refined correspondence guides SuperBIG to perform the transformation in a decorrelated color space. Extensive experimental results demonstrate the effectiveness and robustness of the proposed method for transferring various styles of exemplar images, even in some challenging cases, such as night images.

Mahsa Baktashmotlagh - One of the best experts on this subject based on the ideXlab platform.

  • Adversarial Bipartite Graph Learning for Video Domain Adaptation
    ACM Multimedia, 2020
    Co-Authors: Yadan Luo, Zi Huang, Zijian Wang, Zheng Zhang, Mahsa Baktashmotlagh
    Abstract:

    Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area due to the significant spatial and temporal shifts across the source (i.e., training) and target (i.e., test) domains. As such, recent works on visual domain adaptation, which leverage adversarial learning to unify the source and target video representations and strengthen feature transferability, are not highly effective on videos. To overcome this limitation, in this paper, we learn a domain-agnostic video classifier instead of learning domain-invariant representations, and propose an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions with the network topology of a Bipartite Graph. Specifically, the source and target frames are sampled as heterogeneous vertices, while the edges connecting the two types of nodes measure the affinity between them. Through message passing, each vertex aggregates the features from its heterogeneous neighbors, forcing the features coming from the same class to be mixed evenly. Explicitly exposing the video classifier to such cross-domain representations at the training and test stages makes our model less biased toward the labeled source data, which in turn results in better generalization on the target domain. The proposed framework is agnostic to the choice of frame aggregation, and therefore four different aggregation functions are investigated for capturing appearance and temporal dynamics. To further enhance the model capacity and verify the robustness of the proposed architecture on difficult transfer tasks, we extend our model to work in a semi-supervised setting using an additional video-level Bipartite Graph. Extensive experiments conducted on four benchmark datasets evidence the effectiveness of the proposed approach over state-of-the-art methods on the task of video recognition.
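
    One round of the cross-domain aggregation described above can be sketched as follows. The affinity normalization and toy feature dimensions are illustrative assumptions, not the authors' exact ABG formulation; in the paper the affinities would come from learned edge weights.

```python
import numpy as np

def bipartite_message_pass(src_feats, tgt_feats, affinity):
    """One message-passing round on a source-target Bipartite Graph:
    every vertex replaces its features with an affinity-weighted average
    of its cross-domain neighbors' features."""
    w_src = affinity / affinity.sum(axis=1, keepdims=True)      # source rows attend to targets
    w_tgt = affinity.T / affinity.T.sum(axis=1, keepdims=True)  # target rows attend to sources
    return w_src @ tgt_feats, w_tgt @ src_feats

# Toy setup: 2 source frames, 3 target frames, 4-D features, uniform affinity.
src = np.ones((2, 4))
tgt = np.arange(12.0).reshape(3, 4)
new_src, new_tgt = bipartite_message_pass(src, tgt, np.ones((2, 3)))
print(new_src[0])  # uniform weights give the mean target feature: [4. 5. 6. 7.]
```

    With class-aware affinities instead of uniform ones, features from the same class on both sides are pulled toward a common representation, which is the mixing effect the abstract describes.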

  • Adversarial Bipartite Graph Learning for Video Domain Adaptation
    arXiv: Computer Vision and Pattern Recognition, 2020
    Co-Authors: Yadan Luo, Zi Huang, Zijian Wang, Zheng Zhang, Mahsa Baktashmotlagh
    Abstract:

    Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area due to the significant spatial and temporal shifts across the source (i.e., training) and target (i.e., test) domains. As such, recent works on visual domain adaptation, which leverage adversarial learning to unify the source and target video representations and strengthen feature transferability, are not highly effective on videos. To overcome this limitation, in this paper, we learn a domain-agnostic video classifier instead of learning domain-invariant representations, and propose an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions with the network topology of a Bipartite Graph. Specifically, the source and target frames are sampled as heterogeneous vertices, while the edges connecting the two types of nodes measure the affinity between them. Through message passing, each vertex aggregates the features from its heterogeneous neighbors, forcing the features coming from the same class to be mixed evenly. Explicitly exposing the video classifier to such cross-domain representations at the training and test stages makes our model less biased toward the labeled source data, which in turn results in better generalization on the target domain. To further enhance the model capacity and verify the robustness of the proposed architecture on difficult transfer tasks, we extend our model to work in a semi-supervised setting using an additional video-level Bipartite Graph. Extensive experiments conducted on four benchmarks evidence the effectiveness of the proposed approach over state-of-the-art methods on the task of video recognition.