Story-Telling

The Experts below are selected from a list of 2,247,312 Experts worldwide, ranked by the ideXlab platform.

Harald Kibbat - One of the best experts on this subject based on the ideXlab platform.

Mirella Lapata - One of the best experts on this subject based on the ideXlab platform.

  • Learning to Tell Tales: A Data-driven Approach to Story Generation
    Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2009
    Co-Authors: Neil Mcintyre, Mirella Lapata
    Abstract:

    Computational storytelling has sparked great interest in artificial intelligence, partly because of its relevance to educational and gaming applications. Traditionally, story generators rely on a large repository of background knowledge containing information about the story plot and its characters. This information is detailed and usually hand-crafted. In this paper we propose a data-driven approach for generating short children’s stories that does not require extensive manual involvement. We create an end-to-end system that realizes the various components of the generation pipeline stochastically. Our system follows a generate-and-rank approach where the space of multiple candidate stories is pruned by considering whether they are plausible, interesting, and coherent.
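
The generate-and-rank idea described in this abstract can be illustrated with a small toy program: stochastically realise many candidate stories, score each for coherence and interest, and keep the best. The templates, fillers, scoring heuristics, and weights below are invented placeholders for illustration and are not the authors' actual components; a minimal sketch in Python:

```python
# Minimal, illustrative generate-and-rank story pipeline (toy components,
# not the system described in the paper).
import random

TEMPLATES = [
    "the {a} found a {thing} in the {place}",
    "the {a} asked the {b} about the {thing}",
    "the {a} and the {b} walked to the {place}",
    "the {b} hid the {thing} near the {place}",
]
FILLERS = {
    "a": ["fox", "girl", "owl"],
    "b": ["rabbit", "boy", "bear"],
    "thing": ["key", "lantern", "map"],
    "place": ["forest", "river", "old barn"],
}

def generate_story(length=4, rng=random):
    """Stochastically realise one candidate story, sentence by sentence."""
    sentences = []
    for _ in range(length):
        template = rng.choice(TEMPLATES)
        sentences.append(template.format(**{k: rng.choice(v) for k, v in FILLERS.items()}))
    return sentences

def coherence(story):
    """Toy coherence proxy: word overlap between adjacent sentences."""
    overlaps = []
    for prev, cur in zip(story, story[1:]):
        p, c = set(prev.split()), set(cur.split())
        overlaps.append(len(p & c) / len(p | c))
    return sum(overlaps) / len(overlaps)

def interest(story):
    """Toy interest proxy: lexical variety across the whole story."""
    words = " ".join(story).split()
    return len(set(words)) / len(words)

def generate_and_rank(n_candidates=50, rng=random):
    """Generate many candidate stories and keep the best-scoring one."""
    candidates = [generate_story(rng=rng) for _ in range(n_candidates)]
    return max(candidates, key=lambda s: 0.6 * coherence(s) + 0.4 * interest(s))

if __name__ == "__main__":
    random.seed(0)
    for line in generate_and_rank():
        print(line.capitalize() + ".")
```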

Colette A Granger - One of the best experts on this subject based on the ideXlab platform.

  • Unexpected self-expression and the limits of narrative inquiry: exploring unconscious dynamics in a community-based digital storytelling workshop
    International Journal of Qualitative Studies in Education, 2013
    Co-Authors: Chloe Brushwood Rose, Colette A Granger
    Abstract:

    This study explores the tension between self-knowledge and self-expression, and how it manifests in the processes of storytelling that unfold in digital storytelling workshops offered to new immigrant women living in Toronto, Canada. Both in their multi-modal complexity and in the significant shifts from their original telling, the digital stories produced seem to offer something in excess of the storyteller’s conscious intention. Here we consider what these unexpected self-expressions might mean for theories of narrative and practices of narrative inquiry: How do the unconscious dynamics of storytelling complicate our notions of narrative? How can narrative inquiry account for the unconscious? To explore these questions, we begin with a conceptual exploration of narrative and its limits and possibilities, followed by a discussion of two case studies that illustrate a range of dynamics – telling several different stories, telling a contradictory story and repeating the same story over and over.

Neil Mcintyre - One of the best experts on this subject based on the ideXlab platform.

  • Learning to Tell Tales: A Data-driven Approach to Story Generation
    Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2009
    Co-Authors: Neil Mcintyre, Mirella Lapata
    Abstract:

    Computational storytelling has sparked great interest in artificial intelligence, partly because of its relevance to educational and gaming applications. Traditionally, story generators rely on a large repository of background knowledge containing information about the story plot and its characters. This information is detailed and usually hand-crafted. In this paper we propose a data-driven approach for generating short children’s stories that does not require extensive manual involvement. We create an end-to-end system that realizes the various components of the generation pipeline stochastically. Our system follows a generate-and-rank approach where the space of multiple candidate stories is pruned by considering whether they are plausible, interesting, and coherent.
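
The ranking half of such a pipeline also lends itself to a small sketch. Below, candidate stories are ordered by an add-one-smoothed bigram plausibility score estimated from a tiny toy corpus; the corpus, candidates, and scoring function are invented stand-ins, not the paper's model or data:

```python
# Illustrative ranking step for a generate-and-rank pipeline: score each
# candidate story by its average bigram log-probability under a tiny,
# add-one-smoothed bigram model (toy corpus and candidates).
import math
from collections import Counter

CORPUS = [
    "the fox found a key in the forest",
    "the fox and the rabbit walked to the river",
    "the rabbit hid the key near the river",
]

def train_bigram(sentences):
    """Count context unigrams and bigrams, with sentence boundary markers."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens, tokens[1:]))
    vocab = {w for s in sentences for w in s.split()} | {"<s>", "</s>"}
    return unigrams, bigrams, len(vocab)

def plausibility(sentence, unigrams, bigrams, vocab_size):
    """Average add-one-smoothed bigram log-probability of one sentence."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    logp = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        logp += math.log((bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size))
    return logp / (len(tokens) - 1)

def rank(candidates, model):
    """Order candidate stories by mean sentence plausibility, best first."""
    def score(story):
        return sum(plausibility(s, *model) for s in story) / len(story)
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    model = train_bigram(CORPUS)
    candidates = [
        ["the fox found a key in the forest", "the rabbit hid the key near the river"],
        ["forest the near key a hid", "river walked key the to"],
    ]
    print("\n".join(rank(candidates, model)[0]))
```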

Wattenhofer Roger - One of the best experts on this subject based on the ideXlab platform.

  • Telling BERT's full story: from Local Attention to Global Aggregation
    2021
    Co-Authors: Pascual Damian, Brunner Gino, Wattenhofer Roger
    Abstract:

    We take a deep look into the behavior of self-attention heads in the transformer architecture. In light of recent work discouraging the use of attention distributions for explaining a model's behavior, we show that attention distributions can nevertheless provide insights into the local behavior of attention heads. In this way, we propose a distinction between local patterns revealed by attention and global patterns that refer back to the input, and analyze BERT from both angles. We use gradient attribution to analyze how the output of an attention head depends on the input tokens, effectively extending the local attention-based analysis to account for the mixing of information throughout the transformer layers. We find that there is a significant discrepancy between attention and attribution distributions, caused by the mixing of context inside the model. We quantify this discrepancy and observe that, interestingly, some patterns persist across all layers despite the mixing.
    Comment: Accepted at EACL 2021
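
To make the local-versus-global distinction concrete, here is a minimal sketch that contrasts a head's attention weights with a gradient-attribution view of the same head. The single random self-attention head below is a toy stand-in for BERT, and scoring attribution as the gradient norm of the head's output with respect to each input embedding is one common formulation rather than necessarily the paper's exact method:

```python
# Toy contrast between attention weights (local view) and gradient
# attribution (global view) for one self-attention head.
import torch

torch.manual_seed(0)
seq_len, d_model = 5, 16

# Random embeddings standing in for token representations (toy data, not BERT inputs).
x = torch.randn(seq_len, d_model, requires_grad=True)

# One self-attention head: random projections for queries, keys, and values.
w_q = torch.randn(d_model, d_model) / d_model ** 0.5
w_k = torch.randn(d_model, d_model) / d_model ** 0.5
w_v = torch.randn(d_model, d_model) / d_model ** 0.5

q, k, v = x @ w_q, x @ w_k, x @ w_v
attn = torch.softmax(q @ k.T / d_model ** 0.5, dim=-1)  # attention weights
head_out = attn @ v                                      # per-position head output

# Local view: how strongly the last position attends to every input position.
attention_view = attn[-1].detach()

# Global view: gradient attribution of the last position's output with respect
# to every input embedding, summarised as a per-token gradient norm.
head_out[-1].norm().backward()
attribution_view = x.grad.norm(dim=-1)

print("attention  :", [round(w, 3) for w in attention_view.tolist()])
print("attribution:", [round(w, 3) for w in attribution_view.tolist()])
```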
