Information Aggregation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 73,911 Experts worldwide, ranked by the ideXlab platform.

Rahul Sami - One of the best experts on this subject based on the ideXlab platform.

  • Information Aggregation in exponential family markets
    Economics and Computation, 2014
    Co-Authors: Jacob Abernethy, Sindhu Kutty, Sebastien Lahaie, Rahul Sami
    Abstract:

    We consider the design of prediction market mechanisms known as automated market makers. We show that we can design these mechanisms via the mold of exponential family distributions, a popular and well-studied probability distribution template used in statistics. We give a full development of this relationship and explore a range of benefits. We draw connections between the Information Aggregation of market prices and the belief Aggregation of learning agents that rely on exponential family distributions. We develop a natural analysis of the market behavior as well as the price equilibrium under the assumption that the traders exhibit risk aversion according to exponential utility. We also consider similar aspects under alternative models, such as budget-constrained traders.

  • Information Aggregation in exponential family markets
    arXiv: Artificial Intelligence, 2014
    Co-Authors: Jacob Abernethy, Sindhu Kutty, Sebastien Lahaie, Rahul Sami
    Abstract:

We consider the design of prediction market mechanisms known as automated market makers. We show that we can design these mechanisms via the mold of exponential family distributions, a popular and well-studied probability distribution template used in statistics. We give a full development of this relationship and explore a range of benefits. We draw connections between the Information Aggregation of market prices and the belief Aggregation of learning agents that rely on exponential family distributions. We develop a very natural analysis of the market behavior as well as the price equilibrium under the assumption that the traders exhibit risk aversion according to exponential utility. We also consider similar aspects under alternative models, such as when traders are budget constrained.
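The best-known special case of the automated market makers discussed above is the logarithmic market scoring rule (LMSR), which corresponds to the categorical exponential family: the cost function is the log-partition function and instantaneous prices are its gradient (a softmax). A minimal sketch, assuming a liquidity parameter `b`; the function names are illustrative, not from the paper:

```python
import math

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(x / b for x in q)  # subtract max for numerical stability
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices: the gradient of C, a softmax over q / b."""
    m = max(x / b for x in q)
    w = [math.exp(x / b - m) for x in q]
    s = sum(w)
    return [x / s for x in w]

def trade_cost(q, dq, b=100.0):
    """A trader buying share bundle dq pays C(q + dq) - C(q)."""
    q_new = [a + d for a, d in zip(q, dq)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

Prices always sum to one, so they can be read as the market's aggregated probability estimate over outcomes; the exponential-family view of the paper generalizes this beyond the categorical case.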

  • Aggregation and manipulation in prediction markets effects of trading mechanism and Information distribution
    Management Science, 2012
    Co-Authors: Lian Jian, Rahul Sami
    Abstract:

    We conduct laboratory experiments on variants of market scoring rule prediction markets, under different Information distribution patterns, to evaluate the efficiency and speed of Information Aggregation, as well as test recent theoretical results on manipulative behavior by traders. We find that markets structured to have a fixed sequence of trades exhibit greater accuracy of Information Aggregation than the typical form that has unstructured trade. In comparing two commonly used mechanisms, we find no significant difference between the performance of the direct probability-report form and the indirect security-trading form of the market scoring rule. In the case of the markets with a structured order, we find evidence supporting the theoretical prediction that Information Aggregation is slower when Information is complementary. In structured markets, the theoretical prediction that there will be more delayed trading in complementary markets is supported, but we find no support for the prediction that there will be more bluffing in complementary markets. However, the theoretical predictions are not borne out in the unstructured markets. This paper was accepted by Brad Barber, Teck Ho, and Terrance Odean, special issue editors.

  • Aggregation and manipulation in prediction markets effects of trading mechanism and Information distribution
    Electronic Commerce, 2010
    Co-Authors: Lian Jian, Rahul Sami
    Abstract:

We conduct laboratory experiments on variants of market scoring rule prediction markets, under different Information distribution patterns, to evaluate the efficiency and speed of Information Aggregation, as well as to test recent theoretical results on manipulative behavior by traders. We find that markets structured to have a fixed sequence of trades exhibit greater accuracy of Information Aggregation than the typical form that has unstructured trades. Prior theoretical predictions of differing strategic behavior under complementary and substitute Information distributions are confirmed when the trading order is structured, but not in markets with an unstructured trading order. In markets with a structured order, we find that Information Aggregation is slower when Information is complementary, as traders more frequently engage in bluffing and delaying strategies. In comparing two commonly used mechanisms, we find no significant difference between the performance of the direct probability-report form and the indirect security-trading form of the market scoring rule.

Nicholas R Jennings - One of the best experts on this subject based on the ideXlab platform.

  • time sensitive bayesian Information Aggregation for crowdsourcing systems
    Journal of Artificial Intelligence Research, 2016
    Co-Authors: Matteo Venanzi, John Guiver, Pushmeet Kohli, Nicholas R Jennings
    Abstract:

Many aspects of the design of efficient crowdsourcing processes, such as defining workers' bonuses, fair prices and time limits of the tasks, involve knowledge of the likely duration of the task at hand. In this work we introduce a new time-sensitive Bayesian Aggregation method that simultaneously estimates a task's duration and obtains reliable Aggregations of crowdsourced judgments. Our method, called BCCTime, uses latent variables to represent the uncertainty about the workers' completion time, the tasks' duration and the workers' accuracy. To relate the quality of a judgment to the time a worker spends on a task, our model assumes that each task is completed within a latent time window within which all workers with a propensity to genuinely attempt the labelling task (i.e., no spammers) are expected to submit their judgments. In contrast, workers with a lower propensity to valid labelling, such as spammers, bots or lazy labellers, are assumed to perform tasks considerably faster or slower than the time required by normal workers. Specifically, we use efficient message-passing Bayesian inference to learn approximate posterior probabilities of (i) the confusion matrix of each worker, (ii) the propensity to valid labelling of each worker, (iii) the unbiased duration of each task and (iv) the true label of each task. Using two real-world public datasets for entity linking tasks, we show that BCCTime produces up to 11% more accurate classifications and up to 100% more informative estimates of a task's duration compared to state-of-the-art methods.

  • time sensitive bayesian Information Aggregation for crowdsourcing systems
    arXiv: Artificial Intelligence, 2015
    Co-Authors: Matteo Venanzi, John Guiver, Pushmeet Kohli, Nicholas R Jennings
    Abstract:

Crowdsourcing systems commonly face the problem of aggregating multiple judgments provided by potentially unreliable workers. In addition, several aspects of the design of efficient crowdsourcing processes, such as defining workers' bonuses, fair prices and time limits of the tasks, involve knowledge of the likely duration of the task at hand. Bringing this together, in this work we introduce a new time-sensitive Bayesian Aggregation method that simultaneously estimates a task's duration and obtains reliable Aggregations of crowdsourced judgments. Our method, called BCCTime, builds on the key insight that the time taken by a worker to perform a task is an important indicator of the likely quality of the produced judgment. To capture this, BCCTime uses latent variables to represent the uncertainty about the workers' completion time, the tasks' duration and the workers' accuracy. To relate the quality of a judgment to the time a worker spends on a task, our model assumes that each task is completed within a latent time window within which all workers with a propensity to genuinely attempt the labelling task (i.e., no spammers) are expected to submit their judgments. In contrast, workers with a lower propensity to valid labelling, such as spammers, bots or lazy labellers, are assumed to perform tasks considerably faster or slower than the time required by normal workers. Specifically, we use efficient message-passing Bayesian inference to learn approximate posterior probabilities of (i) the confusion matrix of each worker, (ii) the propensity to valid labelling of each worker, (iii) the unbiased duration of each task and (iv) the true label of each task. Using two real-world public datasets for entity linking tasks, we show that BCCTime produces up to 11% more accurate classifications and up to 100% more informative estimates of a task's duration compared to state-of-the-art methods.
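BCCTime itself performs message-passing inference over per-worker confusion matrices, completion times and task durations. As a much-simplified illustration of the underlying idea — weighting each worker's judgments by an inferred reliability rather than taking a plain majority vote — here is a toy EM aggregator with a single accuracy parameter per worker and binary labels. All names, the uniform label prior, and the initial accuracy guess are illustrative assumptions, not the paper's model:

```python
def em_aggregate(judgments, n_iter=20):
    """judgments: dict task -> dict worker -> label in {0, 1}.
    Returns (posterior P(label=1) per task, estimated accuracy per worker).
    One accuracy parameter per worker; uniform prior over true labels."""
    workers = {w for js in judgments.values() for w in js}
    acc = {w: 0.7 for w in workers}  # initial accuracy guess (assumption)
    post = {}
    for _ in range(n_iter):
        # E-step: posterior over each task's true label given accuracies
        for t, js in judgments.items():
            p1 = p0 = 1.0
            for w, lab in js.items():
                p1 *= acc[w] if lab == 1 else 1 - acc[w]
                p0 *= acc[w] if lab == 0 else 1 - acc[w]
            post[t] = p1 / (p1 + p0)
        # M-step: re-estimate each worker's accuracy from the posteriors
        for w in workers:
            num = den = 0.0
            for t, js in judgments.items():
                if w in js:
                    num += post[t] if js[w] == 1 else 1 - post[t]
                    den += 1
            acc[w] = min(max(num / den, 0.05), 0.95)  # clamp away from 0/1
    return post, acc
```

A worker who consistently disagrees with the inferred consensus ends up with a low accuracy estimate, so their judgments are automatically down-weighted — the same mechanism that, in BCCTime, lets implausibly fast or slow completion times discount a judgment.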

Xiaoqiang Cai - One of the best experts on this subject based on the ideXlab platform.

  • intuitionistic fuzzy Information Aggregation theory and applications
    2013
    Co-Authors: Xiaoqiang Cai
    Abstract:

    "Intuitionistic Fuzzy Information Aggregation: Theory and Applications" is the first book to provide a thorough and systematic introduction to intuitionistic fuzzy Aggregation methods, the correlation, distance and similarity measures of intuitionistic fuzzy sets and various decision-making models and approaches based on the above-mentioned Information processing tools. Through numerous practical examples and illustrations with tables and figures, it offers researchers and professionals in the fields of fuzzy mathematics, Information fusion and decision analysis the most recent research findings, developed by the authors. Zeshui Xu is a Professor at the PLA University of Science and Technology, China. Xiaoqiang Cai is a Professor at the Chinese University of Hong Kong, China.

  • recent advances in intuitionistic fuzzy Information Aggregation
    Fuzzy Optimization and Decision Making, 2010
    Co-Authors: Xiaoqiang Cai
    Abstract:

    Aggregation of intuitionistic fuzzy Information is a new branch of intuitionistic fuzzy set theory, which has attracted significant interest from researchers in recent years. In this paper, we provide a survey of the Aggregation techniques of intuitionistic fuzzy Information, and their applications in various fields, such as decision making, cluster analysis, medical diagnosis, forecasting, and manufacturing grid. In addition, we analyze their characteristics and relationships. Finally, we discuss possible directions for future research in this area.
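A representative example of the operators surveyed here is Xu's intuitionistic fuzzy weighted averaging (IFWA) operator, which aggregates intuitionistic fuzzy numbers (membership mu_i, non-membership nu_i) under weights w_i as (1 - prod (1 - mu_i)^{w_i}, prod nu_i^{w_i}). A minimal sketch; the function name is illustrative:

```python
def ifwa(alphas, weights):
    """Intuitionistic fuzzy weighted averaging (IFWA) operator.
    alphas: list of (mu, nu) pairs with 0 <= mu + nu <= 1.
    weights: nonnegative weights summing to 1."""
    mu_comp = 1.0  # running product of (1 - mu_i)^w_i
    nu_comp = 1.0  # running product of nu_i^w_i
    for (mu, nu), w in zip(alphas, weights):
        mu_comp *= (1 - mu) ** w
        nu_comp *= nu ** w
    return (1 - mu_comp, nu_comp)
```

The operator is idempotent (aggregating identical inputs returns that input) and monotone in the membership degrees, which is what makes it usable as a decision-making aggregation tool.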

Guiwu Wei - One of the best experts on this subject based on the ideXlab platform.

  • hesitant triangular fuzzy Information Aggregation based on einstein operations and their application to multiple attribute decision making
    Expert Systems With Applications, 2014
    Co-Authors: Xiaofei Zhao, Rui Lin, Guiwu Wei
    Abstract:

In this paper, we investigate multiple attribute decision making (MADM) problems in which attribute values take the form of hesitant triangular fuzzy Information. First, the definition, some operational laws, and the score function of hesitant triangular fuzzy elements are introduced. Then we develop some hesitant triangular fuzzy Aggregation operators based on the Einstein operations: the hesitant triangular fuzzy Einstein weighted averaging (HTFEWA) operator, hesitant triangular fuzzy Einstein weighted geometric (HTFEWG) operator, hesitant triangular fuzzy Einstein ordered weighted averaging (HTFEOWA) operator, hesitant triangular fuzzy Einstein ordered weighted geometric (HTFEOWG) operator, hesitant triangular fuzzy Einstein hybrid average (HTFEHA) operator, and hesitant triangular fuzzy Einstein hybrid geometric (HTFEHG) operator. We then apply the HTFEWA and HTFEWG operators to multiple attribute decision making with hesitant triangular fuzzy Information. Finally, an illustrative example is given to show the developed method.
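The Einstein operations underlying these operators replace the usual algebraic sum and product on membership degrees with the Einstein t-conorm and t-norm. A scalar sketch of the building blocks — the HTFEWA operator applies this style of aggregation componentwise to the legs of hesitant triangular fuzzy elements; the function names are illustrative:

```python
def einstein_sum(a, b):
    """Einstein t-conorm on membership degrees in [0, 1]."""
    return (a + b) / (1 + a * b)

def einstein_product(a, b):
    """Einstein t-norm on membership degrees in [0, 1]."""
    return (a * b) / (1 + (1 - a) * (1 - b))

def einstein_weighted_avg(values, weights):
    """Einstein weighted averaging of membership degrees:
    [prod (1+a_i)^w_i - prod (1-a_i)^w_i] /
    [prod (1+a_i)^w_i + prod (1-a_i)^w_i]."""
    p = q = 1.0
    for a, w in zip(values, weights):
        p *= (1 + a) ** w
        q *= (1 - a) ** w
    return (p - q) / (p + q)
```

Unlike the plain algebraic sum, the Einstein t-conorm never exceeds 1, and the weighted average is idempotent, which keeps aggregated degrees inside the unit interval by construction.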

Jian Liu - One of the best experts on this subject based on the ideXlab platform.

  • design and risk evaluation of reliability demonstration test for hierarchical systems with multilevel Information Aggregation
    IEEE Transactions on Reliability, 2017
    Co-Authors: Weidong Zhang, Huairui Guo, Jian Liu
    Abstract:

    As reliability requirements become increasingly demanding for many engineering systems, conventional system reliability demonstration testing (SRDT) based on the number of failures depends on a large sample of system units. However, for many safety critical systems, such as missiles, it is prohibitive to perform such testing with large samples. To reduce the sample size, existing SRDT methods utilize test data from either system level or component level. In this paper, an Aggregation-based SRDT methodology is proposed for hierarchical systems by utilizing multilevel reliability Information of components, subsystems, and the overall system. Analytical conditions are identified for the proposed method to achieve lower consumer risk. The performances of different SRDT design strategies are evaluated and compared according to their consumer risks. A numerical case study is presented to illustrate the proposed methodology and demonstrate its validity and effectiveness.
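For the conventional failure-count SRDT that the paper takes as its baseline, consumer risk is a binomial tail probability: the chance that a system whose true reliability sits at the unacceptable level still passes the test with at most c failures in n trials. A sketch of that baseline calculation (not the paper's multilevel method; names are illustrative):

```python
from math import comb

def consumer_risk(n, c, r_low):
    """P(accept | true reliability = r_low): probability of observing
    at most c failures in n independent trials."""
    p_fail = 1 - r_low
    return sum(comb(n, k) * p_fail ** k * (1 - p_fail) ** (n - k)
               for k in range(c + 1))

def min_sample_size(c, r_low, beta):
    """Smallest n whose consumer risk is <= beta for acceptance number c."""
    n = c + 1
    while consumer_risk(n, c, r_low) > beta:
        n += 1
    return n
```

For a zero-failure test (c = 0) demonstrating against an unacceptable reliability of 0.9 at 10% consumer risk, this already requires 22 system units — exactly the large-sample burden that motivates aggregating lower-level information instead.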

  • bayesian modeling of multi state hierarchical systems with multi level Information Aggregation
    Reliability Engineering & System Safety, 2014
    Co-Authors: Jian Liu, Byoung Uk Kim
    Abstract:

Reliability modeling of multi-state hierarchical systems is challenging because of the complex system structures and imbalanced reliability Information available at different system levels. This paper proposes a Bayesian multi-level Information Aggregation approach to model the reliability of multi-level hierarchical systems by utilizing all available reliability Information throughout the system. Cascading failure dependency among components and/or sub-systems at the same level is explicitly considered. The proposed methodology can significantly improve the accuracy of system-level reliability modeling. A case study demonstrates the effectiveness of the proposed methodology.
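One simple way to see how component-level information propagates to the system level is Monte Carlo aggregation: draw each component's reliability from its posterior (here a Beta distribution) and push the draws through the system's series/parallel structure. This is only an illustrative sketch under independence assumptions, not the paper's model, which additionally handles multi-state behavior and cascading failure dependency:

```python
import random

def system_reliability_samples(structure, comp_post, n=10000, seed=0):
    """Monte Carlo aggregation over a hierarchical structure.
    structure: ("series" | "parallel", [children]); a leaf is a component name.
    comp_post: component name -> (alpha, beta) of its Beta posterior."""
    rng = random.Random(seed)

    def draw(node):
        if isinstance(node, str):  # leaf: sample component reliability
            a, b = comp_post[node]
            return rng.betavariate(a, b)
        kind, children = node
        rs = [draw(c) for c in children]
        if kind == "series":  # all children must work
            r = 1.0
            for x in rs:
                r *= x
            return r
        # parallel: system fails only if every child fails
        f = 1.0
        for x in rs:
            f *= 1 - x
        return 1 - f

    return [draw(structure) for _ in range(n)]
```

The resulting samples form an approximate posterior for system reliability, from which credible intervals or demonstration-test decisions can be derived; richer component data (larger alpha + beta) directly tightens the system-level estimate.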

  • proportional hazard modeling for hierarchical systems with multi level Information Aggregation
    Iie Transactions, 2014
    Co-Authors: Jian Liu
    Abstract:

    Reliability modeling of hierarchical systems is crucial for their health management in many mission-critical industries. Conventional statistical modeling methodologies are constrained by the limited availability of reliability test data, especially when the system-level reliability tests of such systems are expensive and/or time-consuming. This article presents a semi-parametric approach to modeling system-level reliability by systematically and explicitly aggregating lower-level Information of system elements; i.e., components and/or subsystems. An innovative Bayesian inference framework is proposed to implement Information Aggregation based on the known multi-level structure of hierarchical systems and interaction relationships among their composing elements. Numerical case study results demonstrate the effectiveness of the proposed method.