Machine Intelligence

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 52,737 Experts worldwide, ranked by the ideXlab platform

Marcus Hutter - One of the best experts on this subject based on the ideXlab platform.

  • Tests of Machine Intelligence
    arXiv: Artificial Intelligence, 2007
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    Although the definition and measurement of Intelligence is clearly of fundamental importance to the field of artificial Intelligence, no general survey of definitions and tests of Machine Intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of Machine Intelligence that have been proposed.

  • Universal Intelligence: A definition of Machine Intelligence
    Minds and Machines, 2007
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    A fundamental problem in artificial Intelligence is that nobody really knows what Intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human Intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of Intelligence for arbitrary Machines. We believe that this equation formally captures the concept of Machine Intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of Intelligence that have been proposed for Machines.

  • 50 Years of Artificial Intelligence - Tests of Machine Intelligence
    50 Years of Artificial Intelligence, 2007
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    Although the definition and measurement of Intelligence is clearly of fundamental importance to the field of artificial Intelligence, no general survey of definitions and tests of Machine Intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of Machine Intelligence that have been proposed.

  • A Formal Measure of Machine Intelligence
    arXiv: Artificial Intelligence, 2006
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    A fundamental problem in artificial Intelligence is that nobody really knows what Intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human Intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of Intelligence for arbitrary Machines. We believe that this measure formally captures the concept of Machine Intelligence in the broadest reasonable sense.
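
    For orientation, the measure described in the abstracts above is usually stated as a simplicity-weighted sum of expected rewards over all computable environments. The formula below is a reconstruction of the published Legg–Hutter definition rather than text taken from these records, so treat the notation as a paraphrase:

      % Universal intelligence of an agent \pi (reconstructed Legg-Hutter form):
      % E is the class of computable, reward-summable environments, K(\mu) is the
      % Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected
      % total reward of \pi when interacting with \mu.
      \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

    Read informally: an agent rates as more intelligent the more reward it can be expected to accumulate across many environments, with simpler (shorter-to-describe) environments weighted more heavily.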

Shane Legg - One of the best experts on this subject based on the ideXlab platform.

  • Tests of Machine Intelligence
    arXiv: Artificial Intelligence, 2007
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    Although the definition and measurement of Intelligence is clearly of fundamental importance to the field of artificial Intelligence, no general survey of definitions and tests of Machine Intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of Machine Intelligence that have been proposed.

  • Universal Intelligence: A definition of Machine Intelligence
    Minds and Machines, 2007
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    A fundamental problem in artificial Intelligence is that nobody really knows what Intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human Intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of Intelligence for arbitrary Machines. We believe that this equation formally captures the concept of Machine Intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of Intelligence that have been proposed for Machines.

  • 50 Years of Artificial Intelligence - Tests of Machine Intelligence
    50 Years of Artificial Intelligence, 2007
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    Although the definition and measurement of Intelligence is clearly of fundamental importance to the field of artificial Intelligence, no general survey of definitions and tests of Machine Intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of Machine Intelligence that have been proposed.

  • A Formal Measure of Machine Intelligence
    arXiv: Artificial Intelligence, 2006
    Co-Authors: Shane Legg, Marcus Hutter
    Abstract:

    A fundamental problem in artificial Intelligence is that nobody really knows what Intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human Intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of Intelligence for arbitrary Machines. We believe that this measure formally captures the concept of Machine Intelligence in the broadest reasonable sense.

Lotfi A. Zadeh - One of the best experts on this subject based on the ideXlab platform.

  • Toward Human Level Machine Intelligence – Is it Achievable? The Need for a Paradigm Shift
    2009
    Co-Authors: Lotfi A. Zadeh
    Abstract:

    Officially, AI was born in 1956. Since then, very impressive progress has been made in many areas – but not in the realm of human level Machine Intelligence. Anyone who has been forced to use a dumb automated customer service system will readily agree. The Turing Test lies far beyond. Today, no Machine can pass the Turing Test and none is likely to do so in the foreseeable future. During much of its early history, AI was rife with exaggerated expectations. An article published in the late forties of the last century was headlined, “Electric brain capable of translating foreign languages is being built.” Today, more than half a century later, we do have translation software, but nothing that can approach the quality of human translation. Clearly, achievement of human level Machine Intelligence is a challenge that is hard to meet. Humans have many remarkable capabilities; there are two that stand out in importance. First, the capability to reason, converse and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, partiality of truth and possibility. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations. A prerequisite to achievement of human level Machine Intelligence is mechanization of these capabilities and, in particular, mechanization of natural language understanding. In my view, mechanization of these capabilities is beyond the reach of the armamentarium of AI – an armamentarium which in large measure is based on classical, Aristotelian, bivalent logic and bivalent-logic-based probability theory. To make significant progress toward achievement of human level Machine Intelligence, a paradigm shift is needed. More specifically, what is needed is an addition to the armamentarium of AI of two methodologies: (a) a nontraditional methodology of computing with words (CW) or, more generally, NL-Computation; and (b) a countertraditional methodology which involves a progression from computing with numbers to computing with words. The centerpiece of these methodologies is the concept of precisiation of meaning. Addition of these methodologies to AI would be an important step toward the achievement of human level Machine Intelligence and its applications in decision-making, pattern recognition, analysis of evidence, diagnosis and assessment of causality. Such applications have a position of centrality in our infocentric society.

  • Toward Human Level Machine Intelligence – Is It Achievable? The Need for a Paradigm Shift
    Acta Technica Jaurinensis, 2009
    Co-Authors: Lotfi A. Zadeh
    Abstract:

    Officially, AI was born in 1956. Since then, very impressive progress has been made in many areas – but not in the realm of human level Machine Intelligence. Anyone who has been forced to use a dumb automated customer service system will readily agree. The Turing Test lies far beyond. Today, no Machine can pass the Turing Test and none is likely to do so in the foreseeable future. During much of its early history, AI was rife with exaggerated expectations. An article published in the late forties of the last century was headlined, “Electric brain capable of translating foreign languages is being built.” Today, more than half a century later, we do have translation software, but nothing that can approach the quality of human translation. Clearly, achievement of human level Machine Intelligence is a challenge that is hard to meet. Humans have many remarkable capabilities; there are two that stand out in importance. First, the capability to reason, converse and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, partiality of truth and possibility. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations. A prerequisite to achievement of human level Machine Intelligence is mechanization of these capabilities and, in particular, mechanization of natural language understanding. In my view, mechanization of these capabilities is beyond the reach of the armamentarium of AI – an armamentarium which in large measure is based on classical, Aristotelian, bivalent logic and bivalent-logic-based probability theory. To make significant progress toward achievement of human level Machine Intelligence, a paradigm shift is needed. More specifically, what is needed is an addition to the armamentarium of AI of two methodologies: (a) a nontraditional methodology of computing with words (CW) or, more generally, NL-Computation; and (b) a countertraditional methodology which involves a progression from computing with numbers to computing with words. The centerpiece of these methodologies is the concept of precisiation of meaning. Addition of these methodologies to AI would be an important step toward the achievement of human level Machine Intelligence and its applications in decision-making, pattern recognition, analysis of evidence, diagnosis and assessment of causality. Such applications have a position of centrality in our infocentric society.

  • Plenary lecture I: toward human-level Machine Intelligence
    2008
    Co-Authors: Lotfi A. Zadeh
    Abstract:

    Achievement of human-level Machine Intelligence has profound implications for modern society – a society which is becoming increasingly infocentric in its quest for efficiency, convenience and enhancement of quality of life. Humans have many remarkable capabilities. Among them, a capability that stands out in importance is the human ability to perform a wide variety of physical and mental tasks without any measurements and any computations, based on perceptions of distance, speed, direction, intent, likelihood and other attributes of physical and mental objects. A familiar example is driving a car in city traffic. Mechanization of this ability is a challenging objective of Machine Intelligence. Science deals not with reality but with models of reality. In large measure, models of reality in scientific theories are based on classical, Aristotelian, bivalent logic. The brilliant successes of science are visible to all. But when we take a closer look, what we see is that alongside the brilliant successes there are areas where achievement of human-level Machine Intelligence is still a distant objective. We cannot write programs that can summarize a book. We cannot automate driving a car in heavy city traffic. And we are far from being able to construct systems which can understand natural language. Why is the achievement of human-level Machine Intelligence a distant objective? What is widely unrecognized is that one of the principal reasons is the fundamental conflict between the precision of bivalent logic and the imprecision of the real world. In the world of bivalent logic, every proposition is either true or false, with no shades of truth allowed. In the real world, as perceived by humans, most propositions are true to a degree. Humans have a remarkable capability to reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information and partiality of truth. It is this capability that is beyond the reach of bivalent logic – a logic which is intolerant of imprecision and partial truth. A much better fit to the real world is fuzzy logic. In fuzzy logic, everything is or is allowed to be graduated, that is, be a matter of degree or, equivalently, fuzzy. Furthermore, in fuzzy logic everything is or is allowed to be granulated, with a granule being a clump of elements drawn together by indistinguishability, similarity, proximity or functionality. Graduation and granulation play key roles in the ways in which humans deal with complexity and imprecision. In this connection, it should be noted that, in large measure, fuzzy logic is inspired by the ways in which humans deal with complexity, imprecision and partiality of truth. It is in this sense that fuzzy logic is human-centric. In coming years, fuzzy logic and fuzzy-logic-based methods are likely to play increasingly important roles in the achievement of human-level Machine Intelligence. In addition, soft computing is certain to grow in visibility and importance. Basically, soft computing is a coalition of methodologies which in one way or another are directed at the development of better models of reality, human reasoning, risk assessment and decision making. This is the primary motivation for soft computing – a coalition of fuzzy logic, neurocomputing, evolutionary computing, probabilistic computing and Machine learning. The guiding principle of soft computing is that, in general, better results can be achieved through the use of the constituent methodologies of soft computing in combination rather than in a stand-alone mode. (An illustrative sketch of graduation and granulation appears after this list of abstracts.)

  • IEEE ICCI - Toward human level Machine Intelligence - is it achievable?
    2008 7th IEEE International Conference on Cognitive Informatics, 2008
    Co-Authors: Lotfi A. Zadeh
    Abstract:

    Achievement of human level Machine Intelligence has long been one of the basic objectives of AI. Officially, AI was born in 1956. Since then, very impressive progress has been made in many areas – but not in the realm of human level Machine Intelligence. Anyone who has been forced to use a dumb automated customer service system will readily agree. The Turing test lies far beyond. Today, no Machine can pass the Turing test and none is likely to do so in the foreseeable future. To make progress toward achievement of human level Machine Intelligence, AI must add to its armamentarium concepts and techniques drawn from other methodologies, especially evolutionary computing, neurocomputing and fuzzy logic. A key contribution of fuzzy logic is the Machinery of Computing with Words (CW) and, more generally, NL-Computation. This Machinery opens the door to mechanization of natural language understanding and computation with information described in natural language. Addition of this Machinery to the armamentarium of AI would be an important step toward the achievement of human level Machine Intelligence and its applications in decision making, pattern recognition, analysis of evidence, diagnosis and assessment of causality. Such applications have a position of centrality in our info-centric society.

  • Toward Human Level Machine Intelligence - Is It Achievable? The Need for a Paradigm Shift
    IEEE Computational Intelligence Magazine, 2008
    Co-Authors: Lotfi A. Zadeh
    Abstract:

    Officially, AI was born in 1956. Since then, very impressive progress has been made in many areas – but not in the realm of human level Machine Intelligence. During much of its early history, AI was rife with exaggerated expectations. An article published in the late forties of the last century was headlined, "Electric brain capable of translating foreign languages is being built". Today, more than half a century later, we do have translation software, but nothing that can approach the quality of human translation. Clearly, achievement of human level Machine Intelligence is a challenge that is hard to meet. A prerequisite to achievement of human level Machine Intelligence is mechanization of these capabilities and, in particular, mechanization of natural language understanding. To make significant progress toward achievement of human level Machine Intelligence, a paradigm shift is needed. More specifically, what is needed is an addition to the armamentarium of AI of two methodologies: (a) a nontraditional methodology of computing with words (CW) or, more generally, NL-Computation; and (b) a countertraditional methodology which involves a progression from computing with numbers to computing with words. The centerpiece of these methodologies is the concept of precisiation of meaning. Addition of these methodologies to AI would be an important step toward the achievement of human level Machine Intelligence and its applications in decision-making, pattern recognition, analysis of evidence, diagnosis, and assessment of causality. Such applications have a position of centrality in our infocentric society.
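
    The plenary-lecture abstract above distinguishes graduation (truth as a matter of degree) from granulation (clumping similar elements together). The Python sketch below is an invented illustration of those two ideas, assuming a made-up "tall" membership function with a 170–190 cm ramp; it is not drawn from Zadeh's papers:

      # Illustrative sketch only (not from the papers above): "tall" as a graduated
      # concept. Membership is a degree in [0, 1] rather than a true/false value;
      # the 170-190 cm ramp is an invented example, not a standard definition.

      def membership_tall(height_cm: float) -> float:
          """Degree to which a height counts as 'tall' (piecewise-linear ramp)."""
          if height_cm <= 170.0:
              return 0.0
          if height_cm >= 190.0:
              return 1.0
          return (height_cm - 170.0) / 20.0

      # A rough "granule": heights drawn together because they are 'tall' only
      # to an intermediate, similar degree.
      borderline_granule = [h for h in range(150, 201, 5)
                            if 0.0 < membership_tall(h) < 1.0]

      print([(h, round(membership_tall(h), 2)) for h in (165, 175, 185, 195)])
      print(borderline_granule)  # -> [175, 180, 185]

    Bivalent logic would force each of these heights to be simply "tall" or "not tall"; the graded memberships are what the abstract means by propositions being true to a degree.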

Chi-jen Lin - One of the best experts on this subject based on the ideXlab platform.

  • Complementary Machine Intelligence and human Intelligence in virtual teaching assistant for tutoring program tracing
    Computers & Education, 2011
    Co-Authors: Chih-yueh Chou, Bau-hung Huang, Chi-jen Lin
    Abstract:

    This study proposes a virtual teaching assistant (VTA) to share the teacher's tutoring tasks in helping students practice program tracing, together with two mechanisms that complement Machine Intelligence with human Intelligence to develop the VTA. The first mechanism applies Machine Intelligence to extend human Intelligence (teacher answers) in order to evaluate the correctness of student program tracing answers, to locate student errors, and to generate hints that indicate those errors. The second mechanism applies Machine Intelligence to reuse human Intelligence (previous hints that the teacher gave to other students in a similar error situation) to provide program-specific hints. Two evaluations were conducted with 85 and 64 participants, respectively. The results showed that the system helped more than 89% of students correct their errors. The error-indicating hints generated by the first mechanism helped students correct more than half of their errors, and each teacher-generated hint was reused three times on average by the second mechanism. The results also revealed that some error situations occurred frequently and accounted for a large share of all student error situations. In sum, the VTA and these two mechanisms reduce the teacher's tutoring load and the complexity of developing Machine Intelligence.
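
    As a rough illustration of how the two mechanisms described above could fit together, the Python sketch below compares a student trace against a teacher answer and reuses previously recorded teacher hints; the trace format, error-situation key and matching rule are assumptions made for this sketch, not the authors' implementation:

      # Hypothetical sketch of the two mechanisms; data structures are invented.
      from typing import Dict, List, Optional, Tuple

      def locate_errors(teacher_trace: List[str], student_trace: List[str]) -> List[int]:
          """First mechanism, roughly: compare the student's trace against the
          teacher's answer step by step and report the steps that differ."""
          return [i for i, (t, s) in enumerate(zip(teacher_trace, student_trace)) if t != s]

      class HintStore:
          """Second mechanism, roughly: reuse a hint the teacher wrote earlier
          when another student hits the same error situation."""
          def __init__(self) -> None:
              self._hints: Dict[Tuple[str, int, str], str] = {}

          def record(self, program: str, step: int, wrong_value: str, hint: str) -> None:
              self._hints[(program, step, wrong_value)] = hint

          def reuse(self, program: str, step: int, wrong_value: str) -> Optional[str]:
              return self._hints.get((program, step, wrong_value))

      errors = locate_errors(["x=1", "y=2", "x=3"], ["x=1", "y=5", "x=3"])  # -> [1]
      store = HintStore()
      store.record("swap.c", errors[0], "y=5", "Re-check which variable line 2 assigns.")
      print(store.reuse("swap.c", 1, "y=5"))   # reused teacher hint
      print(store.reuse("swap.c", 2, "x=9"))   # None -> escalate to the teacher

    When no stored hint matches, the situation falls back to the teacher, whose new hint is then recorded for later reuse, which is the load-sharing effect the abstract reports.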

Sankar K. Pal - One of the best experts on this subject based on the ideXlab platform.

  • Proceedings of the 2nd International Conference on Perception and Machine Intelligence
    2015
    Co-Authors: Sankar K. Pal, Malay K. Kundu, Santanu Chaudhury, Soma Mitra, Debasis Mazumdar
    Abstract:

    Exploration of the science of human perception forms a major area of research that encompasses psychology, neurobiology and other branches of cognitive science. Machine Intelligence (MI), on the other hand, is an established area of research in which multimodal information obtained from the environment via a Machine's interacting sensors is processed and analysed in a human-like fashion. In recent times, a trend has emerged in which these two key topics, MI and perception engineering, are being fused, with the basic aim of developing new technologies for Machines with human-like behaviour. The "2nd International Conference on Perception and Machine Intelligence" (PerMIn-15) is being organized by C-DAC, Kolkata, India, with technical and knowledge support from the Machine Intelligence Unit (MIU) of the Indian Statistical Institute, Kolkata, with the aim of presenting the state of the art in scientific results and technological achievements in these areas.

  • Fuzzy sets in pattern recognition and Machine Intelligence
    Fuzzy Sets and Systems, 2005
    Co-Authors: Sushmita Mitra, Sankar K. Pal
    Abstract:

    Fuzzy sets constitute the oldest and most widely reported soft computing paradigm. They are well suited to modeling the different forms of uncertainty and ambiguity often encountered in real life. Integration of fuzzy sets with other soft computing tools has led to the generation of more powerful, intelligent and efficient systems. In this position paper we seek to outline the contribution of fuzzy sets to pattern recognition, image processing, and Machine Intelligence over the last 40 years.
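
    One standard instance of fuzzy sets in pattern recognition, of the kind this survey covers, is fuzzy clustering, where each sample belongs to every cluster to a degree. The sketch below is a generic fuzzy c-means-style routine on one-dimensional data, written for illustration; it is not code from the paper, and the data and parameter choices are invented:

      # Generic fuzzy c-means style clustering (illustration only, 1-D data).
      def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
          """Return cluster centres and memberships u[k][i] of point k in cluster i (c >= 2)."""
          lo, hi = min(xs), max(xs)
          centres = [lo + (hi - lo) * i / (c - 1) for i in range(c)]  # spread initial centres
          u = [[0.0] * c for _ in xs]
          for _ in range(iters):
              # Membership update: closer centres receive higher (graded) membership.
              for k, x in enumerate(xs):
                  d = [abs(x - ci) or 1e-12 for ci in centres]   # guard against zero distance
                  for i in range(c):
                      u[k][i] = 1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
              # Centre update: membership-weighted mean of the data.
              for i in range(c):
                  w = [u[k][i] ** m for k in range(len(xs))]
                  centres[i] = sum(wk * x for wk, x in zip(w, xs)) / sum(w)
          return centres, u

      xs = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
      centres, u = fuzzy_c_means(xs)
      print([round(ci, 2) for ci in centres])   # roughly the two group means
      print([round(v, 2) for v in u[0]])        # graded membership of the first point

    The graded memberships, rather than hard cluster assignments, are exactly the kind of uncertainty modeling the abstract attributes to fuzzy sets.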