Visual Discovery

The experts below are selected from a list of 21,744 experts worldwide, ranked by the ideXlab platform.

Kovalerchuk Boris - One of the best experts on this subject based on the ideXlab platform.

  • Non-linear Visual Knowledge Discovery with Elliptic Paired Coordinates
    2021
    Co-Authors: Mcdonald Rose, Kovalerchuk Boris
    Abstract:

    It is challenging for humans to perform visual knowledge discovery with the naked eye in data with more than 2-3 dimensions. This chapter explores the efficiency of discovering predictive machine learning models interactively using new Elliptic Paired Coordinates (EPC) visualizations. It is shown that EPC can visualize multidimensional data and support visual machine learning while preserving multidimensional information in 2-D. Relative to parallel and radial coordinates, an EPC visualization requires only half the visual elements for each n-D point. EllipseVis, an interactive software system developed in this work, processes high-dimensional datasets, creates EPC visualizations, and produces predictive classification models by discovering dominance rules in EPC. Using interactive and automatic processes, it discovers zones in EPC with a high dominance of a single class. In the computational experiments, the EPC methodology successfully discovered non-linear predictive models with high coverage and precision. This can benefit multiple domains by producing visually appealing dominance rules. The chapter presents the results of successfully testing the EPC non-linear methodology in experiments with real and simulated data, a generalization of EPC to Dynamic Elliptic Paired Coordinates (DEPC), the incorporation of coordinate weights to optimize the visual discovery, an alternative EPC design, and the concept of an incompact machine learning methodology based on EPC/DEPC. (Comment: 29 pages, 29 figures, 12 tables)
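
    The exact elliptic layout of EPC is not given in this abstract, so below is only a minimal sketch of the paired-coordinates idea it builds on: consecutive dimensions are paired, so an n-D point becomes n/2 connected 2-D points, half the visual elements of a parallel-coordinates polyline. The function name and layout are illustrative assumptions, not the EllipseVis implementation.

```python
# Minimal sketch (assumed, not EllipseVis): pair consecutive dimensions so an
# n-D point is drawn as n/2 2-D points joined by a polyline, versus the n
# axis crossings a parallel-coordinates plot would need for the same point.
import numpy as np
import matplotlib.pyplot as plt

def paired_polyline(x):
    """Map an n-D point (n even) to n/2 2-D points by pairing dimensions."""
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0, "pad odd-dimensional data with a repeated value"
    return x.reshape(-1, 2)            # row i is the pair (x_{2i}, x_{2i+1})

point_6d = [0.2, 0.7, 0.5, 0.1, 0.9, 0.4]
pts = paired_polyline(point_6d)        # 3 visual points instead of 6
plt.plot(pts[:, 0], pts[:, 1], marker="o")   # one polyline per n-D point
plt.title("6-D point as 3 paired 2-D points")
plt.show()
```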

  • Survey of Explainable Machine Learning with Visual and Granular Methods Beyond Quasi-Explanations
    'Springer Science and Business Media LLC', 2021
    Co-Authors: Kovalerchuk Boris, Ahmad, Muhammad Aurangzeb, Teredesai Ankur
    Abstract:

    This chapter surveys and analyses visual methods for the explainability of machine learning (ML) approaches, with a focus on moving from the quasi-explanations that dominate in ML to actual domain-specific explanations supported by granular visuals. The importance of visual and granular methods for increasing the interpretability and validity of ML models has grown in recent years. Visuals appeal to human perception in a way that other methods do not. ML interpretation is fundamentally a human activity, not a machine activity; thus, visual methods are more readily interpretable. Visual granularity is a natural way to achieve efficient ML explanation. Understanding complex causal reasoning can be beyond human abilities without “downgrading” it to human perceptual and cognitive limits. The visual exploration of multidimensional data at different levels of granularity for knowledge discovery is a long-standing research focus. While multiple efficient methods for the visual representation of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. The chapter starts with the motivation and definitions of different forms of explainability, and how these concepts and information granularity can be integrated in ML. It focuses on a clear distinction between quasi-explanations and actual domain-specific explanations, as well as between a potentially explainable and an actually explained ML model, distinctions that are critically important for further progress in the ML explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC). This family of methods takes the critical step of creating visual explanations that are not merely quasi-explanations but domain-specific visual explanations, while the methods themselves remain domain-agnostic. The chapter includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. It also covers traditional visual methods for understanding multiple ML models, including deep learning and time series models. We illustrate that many of these methods are quasi-explanations and need further enhancement to become actual domain-specific explanations. The chapter concludes by outlining open problems and current research frontiers.
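
    The theoretical limits mentioned above rest on the Johnson-Lindenstrauss lemma; for reference, its standard statement (not quoted from the chapter) is:

```latex
% Johnson–Lindenstrauss lemma, standard form: for any 0 < \varepsilon < 1 and
% any m points in \mathbb{R}^n there is a linear map f:\mathbb{R}^n\to\mathbb{R}^k
% with k = O(\varepsilon^{-2}\log m) preserving all pairwise distances:
\[
(1-\varepsilon)\,\lVert u-v\rVert^2 \;\le\; \lVert f(u)-f(v)\rVert^2 \;\le\;
(1+\varepsilon)\,\lVert u-v\rVert^2 \quad\text{for all pairs } u,\,v .
\]
```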

  • Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
    2020
    Co-Authors: Kovalerchuk Boris, Ahmad, Muhammad Aurangzeb, Teredesai Ankur
    Abstract:

    This paper surveys visual methods for the explainability of machine learning (ML), with a focus on moving from the quasi-explanations that dominate in ML to domain-specific explanations supported by granular visuals. ML interpretation is fundamentally a human activity, and visual methods are more readily interpretable. While efficient visual representations of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. We start with the motivation and the different definitions of explainability. The paper focuses on a clear distinction between quasi-explanations and domain-specific explanations, and between an explainable and an actually explained ML model, distinctions that are critically important for the explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC). These methods take the critical step of creating visual explanations that are not merely quasi-explanations but domain-specific visual explanations, while the methods themselves remain domain-agnostic. The paper includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. The paper also covers traditional visual methods for understanding ML models, including deep learning and time series models. We show that many of these methods are quasi-explanations and need further enhancement to become domain-specific explanations. We conclude by outlining open problems and current research frontiers. (Comment: 45 pages, 34 figures)
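
    A minimal numpy check of the distance-preservation bound the paper builds on; this is a standard Gaussian random projection, not code from the paper, and the sizes and eps are arbitrary choices:

```python
# Empirical Johnson–Lindenstrauss check (illustrative, not from the paper):
# project m points from n-D to k-D with a random Gaussian matrix and verify
# that squared pairwise distances stay within roughly a 1 ± eps factor.
import numpy as np

rng = np.random.default_rng(0)
m, n, eps = 200, 1000, 0.3
k = int(np.ceil(8 * np.log(m) / eps**2))      # common JL target dimension

X = rng.standard_normal((m, n))               # m points in n-D
R = rng.standard_normal((n, k)) / np.sqrt(k)  # random projection matrix
Y = X @ R                                     # the same points in k-D

def sq_dists(A):
    """Squared pairwise distances between the rows of A."""
    sq = (A ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

mask = ~np.eye(m, dtype=bool)                 # ignore zero self-distances
ratio = sq_dists(Y)[mask] / sq_dists(X)[mask]
print(f"k = {k}, distance ratio range: [{ratio.min():.3f}, {ratio.max():.3f}]")
```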

  • Adjustable general line coordinates for visual knowledge discovery in n-D data (no. 12)
    ScholarWorks@CWU, 2017
    Co-Authors: Kovalerchuk Boris, Grishin Vladimir
    Abstract:

    Preserving all multidimensional data in a two-dimensional visualization is a long-standing problem in visual analytics, machine learning/data mining, and multiobjective Pareto optimization. While Parallel and Radial (Star) Coordinates preserve all n-D data in two dimensions, they do not resolve visualization challenges such as occlusion for all possible datasets, so more such methods are needed. Recently, the concept of lossless General Line Coordinates, which generalize Parallel, Radial, Cartesian, and other coordinates, was proposed, with initial exploration and application of several subclasses of General Line Coordinates such as Collocated Paired Coordinates and Star Collocated Paired Coordinates. This article explores and enhances the benefits of General Line Coordinates. It shows ways to increase their expressiveness, including decreasing occlusion and simplifying the visual pattern while preserving all n-D data in two dimensions, by adjusting General Line Coordinates to given n-D datasets. The adjustments include relocating, rescaling, and other transformations of General Line Coordinates. One of the major benefits of General Line Coordinates relative to Parallel Coordinates is that each n-D point is represented with half as many points and lines. This article demonstrates the benefits of different General Line Coordinates for visual analysis of real data, such as health monitoring and benchmark Iris data classification, compared with results from Parallel Coordinates, RadViz, and Support Vector Machines. The experimental part of the article presents the results of an experiment with about 70 participants on the efficiency of visual pattern discovery using Star Collocated Paired Coordinates, Parallel Coordinates, and Radial Coordinates. It shows the advantages of visual discovery of n-D patterns using the General Line Coordinates subclass Star Collocated Paired Coordinates with n = 160 dimensions.
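
    To make the "half as many points and lines" comparison concrete, here is a hedged illustration on the benchmark Iris data mentioned above: standard parallel coordinates (4 vertices per flower) next to a generic Collocated Paired Coordinates rendering (2 vertices per flower). The paired layout is an assumption for illustration, not the article's software.

```python
# Iris in parallel coordinates vs. a generic Collocated Paired Coordinates
# rendering (assumed layout): 4-D flowers need 4 vertices per polyline on the
# left but only 2 paired vertices per polyline on the right.
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame.rename(columns={"target": "species"})

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
parallel_coordinates(df, "species", ax=ax1)       # n = 4 vertices per point
ax1.set_title("Parallel coordinates")

for row, label in zip(iris.data.to_numpy(), iris.target):
    pairs = row.reshape(-1, 2)                    # n/2 = 2 vertices per point
    ax2.plot(pairs[:, 0], pairs[:, 1], marker="o", alpha=0.3,
             color=plt.cm.tab10(int(label)))
ax2.set_title("Collocated Paired Coordinates (generic)")
plt.show()
```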

Teredesai Ankur - One of the best experts on this subject based on the ideXlab platform.

  • Survey of Explainable Machine Learning with Visual and Granular Methods Beyond Quasi-Explanations
    'Springer Science and Business Media LLC', 2021
    Co-Authors: Kovalerchuk Boris, Ahmad, Muhammad Aurangzeb, Teredesai Ankur
    Abstract:

    This chapter surveys and analyses visual methods for the explainability of machine learning (ML) approaches, with a focus on moving from the quasi-explanations that dominate in ML to actual domain-specific explanations supported by granular visuals. The importance of visual and granular methods for increasing the interpretability and validity of ML models has grown in recent years. Visuals appeal to human perception in a way that other methods do not. ML interpretation is fundamentally a human activity, not a machine activity; thus, visual methods are more readily interpretable. Visual granularity is a natural way to achieve efficient ML explanation. Understanding complex causal reasoning can be beyond human abilities without “downgrading” it to human perceptual and cognitive limits. The visual exploration of multidimensional data at different levels of granularity for knowledge discovery is a long-standing research focus. While multiple efficient methods for the visual representation of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. The chapter starts with the motivation and definitions of different forms of explainability, and how these concepts and information granularity can be integrated in ML. It focuses on a clear distinction between quasi-explanations and actual domain-specific explanations, as well as between a potentially explainable and an actually explained ML model, distinctions that are critically important for further progress in the ML explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC). This family of methods takes the critical step of creating visual explanations that are not merely quasi-explanations but domain-specific visual explanations, while the methods themselves remain domain-agnostic. The chapter includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. It also covers traditional visual methods for understanding multiple ML models, including deep learning and time series models. We illustrate that many of these methods are quasi-explanations and need further enhancement to become actual domain-specific explanations. The chapter concludes by outlining open problems and current research frontiers.

  • Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
    2020
    Co-Authors: Kovalerchuk Boris, Ahmad, Muhammad Aurangzeb, Teredesai Ankur
    Abstract:

    This paper surveys visual methods for the explainability of machine learning (ML), with a focus on moving from the quasi-explanations that dominate in ML to domain-specific explanations supported by granular visuals. ML interpretation is fundamentally a human activity, and visual methods are more readily interpretable. While efficient visual representations of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. We start with the motivation and the different definitions of explainability. The paper focuses on a clear distinction between quasi-explanations and domain-specific explanations, and between an explainable and an actually explained ML model, distinctions that are critically important for the explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC). These methods take the critical step of creating visual explanations that are not merely quasi-explanations but domain-specific visual explanations, while the methods themselves remain domain-agnostic. The paper includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. The paper also covers traditional visual methods for understanding ML models, including deep learning and time series models. We show that many of these methods are quasi-explanations and need further enhancement to become domain-specific explanations. We conclude by outlining open problems and current research frontiers. (Comment: 45 pages, 34 figures)

Ahmad, Muhammad Aurangzeb - One of the best experts on this subject based on the ideXlab platform.

  • Survey of Explainable Machine Learning with Visual and Granular Methods Beyond Quasi-Explanations
    'Springer Science and Business Media LLC', 2021
    Co-Authors: Kovalerchuk Boris, Ahmad, Muhammad Aurangzeb, Teredesai Ankur
    Abstract:

    This chapter surveys and analyses visual methods for the explainability of machine learning (ML) approaches, with a focus on moving from the quasi-explanations that dominate in ML to actual domain-specific explanations supported by granular visuals. The importance of visual and granular methods for increasing the interpretability and validity of ML models has grown in recent years. Visuals appeal to human perception in a way that other methods do not. ML interpretation is fundamentally a human activity, not a machine activity; thus, visual methods are more readily interpretable. Visual granularity is a natural way to achieve efficient ML explanation. Understanding complex causal reasoning can be beyond human abilities without “downgrading” it to human perceptual and cognitive limits. The visual exploration of multidimensional data at different levels of granularity for knowledge discovery is a long-standing research focus. While multiple efficient methods for the visual representation of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. The chapter starts with the motivation and definitions of different forms of explainability, and how these concepts and information granularity can be integrated in ML. It focuses on a clear distinction between quasi-explanations and actual domain-specific explanations, as well as between a potentially explainable and an actually explained ML model, distinctions that are critically important for further progress in the ML explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC). This family of methods takes the critical step of creating visual explanations that are not merely quasi-explanations but domain-specific visual explanations, while the methods themselves remain domain-agnostic. The chapter includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. It also covers traditional visual methods for understanding multiple ML models, including deep learning and time series models. We illustrate that many of these methods are quasi-explanations and need further enhancement to become actual domain-specific explanations. The chapter concludes by outlining open problems and current research frontiers.

  • Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
    2020
    Co-Authors: Kovalerchuk Boris, Ahmad, Muhammad Aurangzeb, Teredesai Ankur
    Abstract:

    This paper surveys visual methods for the explainability of machine learning (ML), with a focus on moving from the quasi-explanations that dominate in ML to domain-specific explanations supported by granular visuals. ML interpretation is fundamentally a human activity, and visual methods are more readily interpretable. While efficient visual representations of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. We start with the motivation and the different definitions of explainability. The paper focuses on a clear distinction between quasi-explanations and domain-specific explanations, and between an explainable and an actually explained ML model, distinctions that are critically important for the explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC). These methods take the critical step of creating visual explanations that are not merely quasi-explanations but domain-specific visual explanations, while the methods themselves remain domain-agnostic. The paper includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. The paper also covers traditional visual methods for understanding ML models, including deep learning and time series models. We show that many of these methods are quasi-explanations and need further enhancement to become domain-specific explanations. We conclude by outlining open problems and current research frontiers. (Comment: 45 pages, 34 figures)

S V N Vishwanathan - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive, Personalized Diversity for Visual Discovery
    Conference on Recommender Systems, 2016
    Co-Authors: Choon Hui Teo, Houssam Nassif, Daniel Hill, Sriram Srinivasan, Mitchell Goodman, Vijai Mohan, S V N Vishwanathan
    Abstract:

    Search queries are appropriate when users have explicit intent, but they perform poorly when the intent is difficult to express or when the user is simply looking to be inspired. Visual browsing systems allow e-commerce platforms to address these scenarios while offering the user an engaging shopping experience. Here we explore extensions in the direction of adaptive personalization and item diversification within Stream, a new form of visual browsing and discovery by Amazon. Our system presents the user with a diverse set of interesting items while adapting to user interactions. Our solution consists of three components: (1) a Bayesian regression model for scoring the relevance of items while leveraging uncertainty, (2) a submodular diversification framework that re-ranks the top-scoring items based on category, and (3) personalized category preferences learned from the user's behavior. When tested on live traffic, our algorithms show a strong lift in click-through rate and session duration.
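
    The paper's implementation is not reproduced here; the following toy sketch shows how the three named components could fit together, with all priors, names, and update rules being assumptions: a Bayesian linear model scored by Thompson sampling so that uncertainty drives exploration, and a greedy re-ranker whose per-category discount realizes a submodular, diminishing-returns objective that folds in the user's learned category preferences.

```python
# Toy sketch (assumed, not Amazon's Stream system) of the abstract's three
# components: (1) Bayesian regression scoring with uncertainty via Thompson
# sampling, (2) greedy submodular-style diversification over categories,
# (3) per-user category preferences folded into the greedy objective.
import numpy as np

rng = np.random.default_rng(1)

class BayesianScorer:
    """Bayesian linear regression with a unit Gaussian prior."""
    def __init__(self, dim, noise=1.0):
        self.A = np.eye(dim)             # posterior precision matrix
        self.b = np.zeros(dim)           # precision-weighted mean
        self.noise = noise

    def update(self, x, reward):         # fold in one observed interaction
        self.A += np.outer(x, x) / self.noise
        self.b += x * reward / self.noise

    def sample_scores(self, X):          # Thompson sampling: draw one weight
        cov = np.linalg.inv(self.A)      # vector from the posterior, score all
        w = rng.multivariate_normal(cov @ self.b, cov)
        return X @ w

def diversified_rerank(scores, categories, user_pref, k, penalty=0.5):
    """Greedy pick: relevance + user's category preference, with a
    diminishing-returns discount each time a category repeats."""
    chosen, seen = [], {}
    for _ in range(k):                   # assumes k <= len(scores)
        gain = [(-np.inf if i in chosen else
                 s + user_pref.get(categories[i], 0.0)
                 - penalty * seen.get(categories[i], 0))
                for i, s in enumerate(scores)]
        best = int(np.argmax(gain))
        chosen.append(best)
        seen[categories[best]] = seen.get(categories[best], 0) + 1
    return chosen
```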

Vishwanathan Svn - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive, Personalized Diversity for Visual Discovery
    'Association for Computing Machinery (ACM)', 2018
    Co-Authors: Teo, Choon Hui, Nassif Houssam, Hill Daniel, Srinivasan Sriram, Goodman Mitchell, Mohan Vijai, Vishwanathan Svn
    Abstract:

    Search queries are appropriate when users have explicit intent, but they perform poorly when the intent is difficult to express or when the user is simply looking to be inspired. Visual browsing systems allow e-commerce platforms to address these scenarios while offering the user an engaging shopping experience. Here we explore extensions in the direction of adaptive personalization and item diversification within Stream, a new form of visual browsing and discovery by Amazon. Our system presents the user with a diverse set of interesting items while adapting to user interactions. Our solution consists of three components: (1) a Bayesian regression model for scoring the relevance of items while leveraging uncertainty, (2) a submodular diversification framework that re-ranks the top-scoring items based on category, and (3) personalized category preferences learned from the user's behavior. When tested on live traffic, our algorithms show a strong lift in click-through rate and session duration. (Comment: Best Paper Award)