Satisfies Condition

The Experts below are selected from a list of 42 Experts worldwide ranked by ideXlab platform

Perry Robinson Macneille - One of the best experts on this subject based on the ideXlab platform.

  • A Bayesian Framework for Learning Rule Sets for Interpretable Classification
    Journal of Machine Learning Research, 2017
    Co-Authors: Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, Perry Robinson Macneille
    Abstract:

    We present a machine learning algorithm for building classifiers that are composed of a small number of short rules. These are restricted disjunctive normal form models. An example of a classifier of this form is as follows: if X satisfies (Condition A AND Condition B) OR (Condition C) OR ..., then Y = 1. Models of this form have the advantage of being interpretable to human experts, since they produce a set of rules that concisely describes a specific class. We present two probabilistic models with prior parameters that the user can set to encourage the model to have a desired size and shape, conforming to a domain-specific definition of interpretability. We provide a scalable MAP inference approach and develop theoretical bounds to reduce computation by iteratively pruning the search space. We apply our method (Bayesian Rule Sets, BRS) to characterize and predict user behavior with respect to in-vehicle context-aware personalized recommender systems. Our method has a major advantage over classical associative classification methods and decision trees in that it does not greedily grow the model.
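    The "OR of ANDs" (disjunctive normal form) classifier described in the abstract can be sketched in a few lines. The rules and feature names below are hypothetical illustrations loosely themed on the in-vehicle recommendation setting, not the learned BRS model from the paper:

    ```python
    # A rule set is a list of rules; each rule is a conjunction (AND) of
    # (feature, required_value) conditions. The classifier predicts 1 if
    # the example satisfies ANY rule (the OR over rules), else 0.
    rules = [
        [("weather", "sunny"), ("passenger", "alone")],  # Condition A AND Condition B
        [("time", "lunch")],                             # Condition C
    ]

    def predict(x, rules):
        """Return 1 if x satisfies at least one rule, else 0."""
        for rule in rules:
            if all(x.get(feature) == value for feature, value in rule):
                return 1
        return 0

    x1 = {"weather": "sunny", "passenger": "alone", "time": "morning"}
    x2 = {"weather": "rainy", "passenger": "family", "time": "lunch"}
    x3 = {"weather": "rainy", "passenger": "family", "time": "morning"}
    print(predict(x1, rules))  # 1: satisfies the first rule
    print(predict(x2, rules))  # 1: satisfies the second rule
    print(predict(x3, rules))  # 0: satisfies neither rule
    ```

    The interpretability claim follows directly from this structure: each positive prediction is justified by naming the specific satisfied rule, which is a short, human-readable conjunction of conditions.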

Tong Wang - One of the best experts on this subject based on the ideXlab platform.

  • A Bayesian Framework for Learning Rule Sets for Interpretable Classification
    Journal of Machine Learning Research, 2017
    Co-Authors: Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, Perry Robinson Macneille

Yang Yongbao - One of the best experts on this subject based on the ideXlab platform.

Erica Klampfl - One of the best experts on this subject based on the ideXlab platform.

  • A Bayesian Framework for Learning Rule Sets for Interpretable Classification
    Journal of Machine Learning Research, 2017
    Co-Authors: Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, Perry Robinson Macneille

Finale Doshi-Velez - One of the best experts on this subject based on the ideXlab platform.

  • A Bayesian Framework for Learning Rule Sets for Interpretable Classification
    Journal of Machine Learning Research, 2017
    Co-Authors: Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, Perry Robinson Macneille