Mutual Cooperation

The Experts below are selected from a list of 44,112 Experts worldwide, ranked by the ideXlab platform.

Szeto Kwok Yip - One of the best experts on this subject based on the ideXlab platform.

  • SAC - Sustaining Mutual Cooperation in iterated prisoner's dilemma game
    Proceedings of the 30th Annual ACM Symposium on Applied Computing, 2015
    Co-Authors: Kim Minsam, Szeto Kwok Yip
    Abstract:

    In this paper, we study the conditions under which Mutual Cooperation among one-step memory evolutionary agents in a stochastic iterated prisoner's dilemma game platform can be sustained. The agents evolve along the gradient of the payoff field. A metric is introduced to quantify players' ability to sustain Cooperation with other players. Numerical experiments indicate each allele's role in sustaining the cooperative relationship by hiding its weaknesses from the opponent. The results are analyzed mathematically by transforming the problem into a Markov chain state-value problem to obtain partial derivatives of the payoff functions. Finally, possible applications of the methodology and results in this paper to multi-agent systems control are discussed.
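
    The Markov-chain machinery this abstract alludes to can be sketched concretely. Below is a minimal illustration (not the authors' code; the payoff values and strategy vectors are assumptions): two one-step memory (memory-one) strategies induce a four-state Markov chain over the joint outcomes CC, CD, DC, DD, and the stationary distribution of that chain gives each player's long-run payoff.

        import numpy as np

        R, S, T, P = 3.0, 0.0, 5.0, 1.0   # conventional PD payoffs (an assumption)

        def transition_matrix(p, q):
            # p, q: probabilities of cooperating after outcomes (CC, CD, DC, DD),
            # each indexed from that player's own perspective.
            mirror = [0, 2, 1, 3]          # player 2 sees CD as DC and vice versa
            M = np.zeros((4, 4))
            for s in range(4):
                c1, c2 = p[s], q[mirror[s]]
                M[s] = [c1 * c2, c1 * (1 - c2), (1 - c1) * c2, (1 - c1) * (1 - c2)]
            return M

        def long_run_payoff(p, q):
            # Stationary distribution: left eigenvector of M for eigenvalue 1.
            # (For strictly deterministic strategies the chain can be reducible;
            # slightly noisy entries keep it ergodic.)
            M = transition_matrix(p, q)
            w, v = np.linalg.eig(M.T)
            pi = np.real(v[:, np.argmin(np.abs(w - 1))])
            pi /= pi.sum()
            return float(pi @ np.array([R, S, T, P]))   # player 1's long-run payoff

        tft = [1.0, 0.0, 1.0, 0.0]                # tit-for-tat as a memory-one vector
        print(long_run_payoff([0.99] * 4, tft))   # near-always-cooperate vs TFT: close to R

    The partial derivatives of this long-run payoff with respect to the entries of p, which the paper obtains analytically, can then be read off or estimated numerically from this computation.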

Kim Minsam - One of the best experts on this subject based on the ideXlab platform.

  • SAC - Sustaining Mutual Cooperation in iterated prisoner's dilemma game
    Proceedings of the 30th Annual ACM Symposium on Applied Computing, 2015
    Co-Authors: Kim Minsam, Szeto Kwok Yip
    Abstract:

    In this paper, we study the conditions under which Mutual Cooperation among one-step memory evolutionary agents in a stochastic iterated prisoner's dilemma game platform can be sustained. The agents evolve along the gradient of the payoff field. A metric is introduced to quantify players' ability to sustain Cooperation with other players. Numerical experiments indicate each allele's role in sustaining the cooperative relationship by hiding its weaknesses from the opponent. The results are analyzed mathematically by transforming the problem into a Markov chain state-value problem to obtain partial derivatives of the payoff functions. Finally, possible applications of the methodology and results in this paper to multi-agent systems control are discussed.
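
    As a complement to the Markov-chain sketch under the previous entry, the phrase "the agents evolve along the gradient of the payoff field" can be illustrated by a finite-difference gradient-ascent step on a memory-one strategy. This is a hedged sketch, not the paper's procedure: payoffs are estimated here by direct simulation rather than analytically, and the step size, bump size, and starting strategy are all assumptions.

        import random

        R, S, T, P = 3.0, 0.0, 5.0, 1.0           # conventional PD payoffs (assumed)
        PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
        STATE = {('C', 'C'): 0, ('C', 'D'): 1, ('D', 'C'): 2, ('D', 'D'): 3}

        def avg_payoff(p, q, rounds=20000, seed=0):
            # Player 1's average payoff when memory-one strategies p and q play.
            rng = random.Random(seed)
            a, b, total = 'C', 'C', 0.0           # assume a cooperative first round
            for _ in range(rounds):
                total += PAYOFF[(a, b)]
                a, b = ('C' if rng.random() < p[STATE[(a, b)]] else 'D',
                        'C' if rng.random() < q[STATE[(b, a)]] else 'D')
            return total / rounds

        def gradient_step(p, q, eps=0.05, lr=0.1):
            # One ascent step along a finite-difference estimate of the gradient.
            base = avg_payoff(p, q)
            grad = []
            for i in range(4):
                bumped = list(p)
                bumped[i] = min(1.0, bumped[i] + eps)
                grad.append((avg_payoff(bumped, q) - base) / eps)
            return [min(1.0, max(0.0, pi + lr * g)) for pi, g in zip(p, grad)]

        p = [0.5, 0.5, 0.5, 0.5]                  # undifferentiated starting strategy
        tft = [1.0, 0.0, 1.0, 0.0]                # tit-for-tat as a memory-one vector
        for _ in range(5):
            p = gradient_step(p, tft)
        print(p)  # against TFT, p typically drifts toward cooperating after CC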

Robert Sibarani - One of the best experts on this subject based on the ideXlab platform.

  • Batak Toba society’s local wisdom of Mutual Cooperation in Toba Lake area: a linguistic anthropology study
    International Journal of Human Rights in Healthcare, 2018
    Co-Authors: Robert Sibarani
    Abstract:

    Purpose: The purpose of this paper is to examine Batak Toba society’s local wisdom of Mutual Cooperation in the Toba Lake area through a linguistic anthropology study. Design/methodology/approach: This research employed a qualitative paradigm with four methods of data collection: in-depth open-ended interviews, direct participatory observation, focus group discussions (FGD), and written documents. The in-depth, open-ended interviews were used to obtain data from informants who understand the local wisdom of Mutual Cooperation, the traditional expressions that serve as its collective memory, and the terms for Mutual Cooperation in Batak Toba society. Findings: Batak Toba society has its own terms for gotong royong (Mutual Cooperation): marsirimpa or marsirumpa (cohesive, in unison, and together). The basic rule of gotong royong in Batak Toba society is thus cohesion, synchrony, and togetherness; in other words, gotong royong there means working cohesively, in unison, and together, as practiced in life cycles, livelihood cycles, and public works. Originality/value: This paper presents a new and significant contribution to the study of social and economic activity, especially socio-anthropology. People no longer consider the implementation of Mutual Cooperation, forgetting that marsirimpa (the local term for Mutual Cooperation) can serve as non-material capital for socio-economic development. Marsirimpa can improve social activity because its main principles are based on “solidarity” and “harmony.” Economically, this research shows a benefit for the people in the research area (Tippang village) compared with the neighboring area (Bakkara village): people in Tippang village earn better incomes because they believe that many tasks, for instance irrigating, paddy planting, and paddy cutting, should be done together, so they do not need to pay for workers. Each clan has its own representative to manage irrigation, and land digging and paddy cutting are done collectively. With respect to social anthropology, the tradition around the research area is still maintained because it makes people value social interaction.

Mark Stafford Smith - One of the best experts on this subject based on the ideXlab platform.

  • Modelling community interactions and social capital dynamics: The case of regional and rural communities of Australia
    Agricultural Systems, 2006
    Co-Authors: Yiheyis Maru, Ryan R. J. Mcallister, Mark Stafford Smith
    Abstract:

    Tension between dominant urban political and economic centres and associated rural communities continues despite various decentralisation programs. This phenomenon is often explained in terms of a core-periphery political economy. To generate complementary explanatory hypotheses from a social perspective, we examine the impacts of rural–urban community interactions on the development of social norms such as the strength of Mutual Cooperation. We explore this using deliberately abstracted models of the “Iterated Prisoners’ Dilemma” to represent intra- and inter-community interactions, under regionalised and centralised interaction arrangements. We consider changes in Mutual Cooperation as an indicator of social capital dynamics. In our model, increasing interaction of rural communities with urban centres increases “non-cooperative” behaviour among members of the small rural communities. Moreover, as the strength of centralisation (the proportion of interaction between each rural community and the urban centre, compared with the interaction among rural communities) increases, cooperative behaviour among members of the smaller rural communities and Mutual Cooperation with individuals of the larger urban centre both decrease. Our hypothesis is that interaction with urban centres disrupts norms used in resolving local social dilemmas. If so, this partly explains the dissatisfaction expressed in rural–urban relations, which may be a fundamental emergent property of particular settlement patterns. Understanding these relationships could guide better policies to facilitate rural–urban interactions amid the worldwide trend towards regionalism.
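
    The centralisation parameter lends itself to a toy simulation. The sketch below is not the authors' model: the update rules (a cooperation propensity that rises slightly after Mutual Cooperation and falls after being exploited) and all numbers are assumptions. It only reproduces the qualitative claim that raising the share of interactions with a defection-prone urban centre erodes Mutual Cooperation among rural agents.

        import random

        def rural_mutual_cooperation(c, n=40, rounds=30000, urban_coop=0.2,
                                     up=0.01, down=0.05, seed=1):
            # Fraction of rural-rural encounters ending in mutual cooperation,
            # where c = probability that an encounter is with the urban centre.
            rng = random.Random(seed)
            coop = [0.9] * n                 # initial rural cooperation propensities
            mutual = trials = 0
            for _ in range(rounds):
                i = rng.randrange(n)
                if rng.random() < c:         # rural agent meets the urban centre
                    if rng.random() < coop[i] and rng.random() >= urban_coop:
                        coop[i] = max(0.0, coop[i] - down)   # exploited: norm erodes
                else:                        # two rural agents meet
                    j = rng.randrange(n)
                    while j == i:
                        j = rng.randrange(n)
                    a, b = rng.random() < coop[i], rng.random() < coop[j]
                    trials += 1
                    if a and b:              # mutual cooperation reinforces the norm
                        mutual += 1
                        coop[i] = min(1.0, coop[i] + up)
                        coop[j] = min(1.0, coop[j] + up)
                    elif a and not b:        # the burned cooperator loses trust
                        coop[i] = max(0.0, coop[i] - down)
                    elif b and not a:
                        coop[j] = max(0.0, coop[j] - down)
            return mutual / max(trials, 1)

        for c in (0.0, 0.3, 0.6, 0.9):       # stronger centralisation, less cooperation
            print(c, round(rural_mutual_cooperation(c), 3))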

Koichi Moriyama - One of the best experts on this subject based on the ideXlab platform.

  • The resilience of Cooperation in a Dilemma game played by reinforcement learning agents
    2017 IEEE International Conference on Agents (ICA), 2017
    Co-Authors: Koichi Moriyama, Kaori Nakase, Atsuko Mutoh, Nobuhiro Inuzuka
    Abstract:

    This work discusses what an (independent) reinforcement learning agent can do in a multiagent environment. In particular, we consider a stateless Q-learning agent in a Prisoner's Dilemma (PD) game. Although it had been shown in the literature that stateless, independent Q-learning agents find it difficult to cooperate with each other in an iterated PD (IPD) game, we previously gave a condition on PD payoffs and Q-learning parameters that helps the agents cooperate with each other. Based on that condition, we also discussed the ratio of Mutual Cooperation occurring in IPD games. That analysis supposed that Mutual Cooperation was fragile, i.e., that one unfortunate defection would send the agents down the spiral of Mutual defection. However, this is not always correct: Mutual Cooperation can reinforce itself, and thus be robust and resilient. Hence, this work analytically derives how long a series of Mutual Cooperation continues once it has happened, taking this resilience into account. This gives us further insight into the process of reinforcement learning in IPD games.
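
    A minimal version of the setting studied here is easy to write down. In the sketch below, two independent, stateless Q-learners play an iterated PD and the longest run of Mutual Cooperation is recorded. Epsilon-greedy exploration, the payoff values, and the bootstrap-free update are all assumptions; the paper's precise payoff/parameter condition is not reproduced.

        import random

        R, S, T, P = 3.0, 0.0, 5.0, 1.0       # conventional PD payoffs (assumed)
        PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
                  ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

        def choose(q, eps, rng):
            # Epsilon-greedy; on ties, dict order favors 'C' here (an assumption).
            if rng.random() < eps:
                return rng.choice(('C', 'D'))
            return max(q, key=q.get)

        def longest_cooperation_run(alpha=0.1, eps=0.05, rounds=50000, seed=0):
            rng = random.Random(seed)
            q1 = {'C': 0.0, 'D': 0.0}
            q2 = {'C': 0.0, 'D': 0.0}
            streak = best = 0
            for _ in range(rounds):
                a, b = choose(q1, eps, rng), choose(q2, eps, rng)
                r1, r2 = PAYOFF[(a, b)]
                q1[a] += alpha * (r1 - q1[a])   # stateless update, no bootstrap term
                q2[b] += alpha * (r2 - q2[b])
                streak = streak + 1 if (a, b) == ('C', 'C') else 0
                best = max(best, streak)
            return best

        print(longest_cooperation_run())  # long runs indicate resilient cooperation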

  • AAMAS - Cooperation-eliciting prisoner's dilemma payoffs for reinforcement learning agents
    2014
    Co-Authors: Koichi Moriyama, Satoshi Kurihara, Masayuki Numao
    Abstract:

    This work considers a stateless Q-learning agent in the iterated Prisoner's Dilemma (PD). We have already given a condition on PD payoffs and Q-learning parameters that helps stateless Q-learning agents cooperate with each other. That condition, however, has a restrictive premise. This work relaxes the premise and shows a new payoff condition for Mutual Cooperation. We then derive, from the new condition, the payoff relations that will elicit Mutual Cooperation.
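
    The new payoff condition itself is not given in this abstract, so it is not reproduced here. As background only, any such analysis starts from the standard payoff ordering that defines a Prisoner's Dilemma, which can be sanity-checked as follows; the paper's condition is a further constraint on top of this.

        def is_prisoners_dilemma(T, R, P, S):
            # T: temptation, R: reward, P: punishment, S: sucker's payoff.
            # T > R > P > S defines the dilemma; 2R > T + S makes Mutual
            # Cooperation beat alternating exploitation in the iterated game.
            return T > R > P > S and 2 * R > T + S

        print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))   # True for the classic values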

  • Utility based Q-learning to facilitate Cooperation in Prisoner's Dilemma games
    Web Intelligence and Agent Systems: An International Journal, 2009
    Co-Authors: Koichi Moriyama
    Abstract:

    This work deals with Q-learning in a multiagent environment. There are many multiagent Q-learning methods, and most of them aim to converge to a Nash equilibrium, which is not desirable in games like the Prisoner's Dilemma (PD). However, ordinary Q-learning agents that choose actions stochastically to avoid local optima may yield Mutual Cooperation in a PD game. Although such Mutual Cooperation usually occurs only singly, it can be facilitated if the Q-function of Cooperation becomes larger than that of defection after the Cooperation. This work derives a theorem on how many consecutive repetitions of Mutual Cooperation are needed to make the Q-function of Cooperation larger than that of defection. In addition, following the author's previous works, which distinguish utilities from rewards and use utilities for learning in PD games, this work also derives a corollary on how much utility is necessary to make the Q-function larger through one-shot Mutual Cooperation.
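
    The flavor of the theorem can be shown with a back-of-the-envelope calculation. Assuming the simple stateless update Q(a) <- Q(a) + alpha * (r - Q(a)) with no bootstrap term (an assumption; the paper's update and constants may differ), Q(D) is frozen while only Cooperation is played, and Q(C) approaches the Mutual Cooperation reward R geometrically: Q_k(C) = R - (R - Q_0(C)) * (1 - alpha)**k. The required number of consecutive cooperations then follows in closed form.

        import math

        def cooperations_needed(q_c0, q_d, R=3.0, alpha=0.1):
            # Smallest integer k with R - (R - q_c0)*(1 - alpha)**k > q_d,
            # i.e. the number of consecutive mutual cooperations after which
            # Q(C) strictly overtakes the frozen Q(D).
            assert q_c0 < q_d < R, "start with defection preferred and R above both"
            k = math.log((R - q_d) / (R - q_c0)) / math.log(1 - alpha)
            return math.floor(k) + 1

        print(cooperations_needed(q_c0=0.0, q_d=1.0))   # 4 with these assumed values

    For example, with q_c0 = 0, q_d = 1, R = 3, and alpha = 0.1, four consecutive Mutual Cooperations give Q(C) = 3 - 3 * 0.9**4 ≈ 1.03 > 1, while three give only ≈ 0.81.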

  • Learning-Rate Adjusting Q-Learning for Prisoner's Dilemma Games
    2008 IEEE WIC ACM International Conference on Web Intelligence and Intelligent Agent Technology, 2008
    Co-Authors: Koichi Moriyama
    Abstract:

    Many multiagent Q-learning algorithms have been proposed to date, and most of them aim to converge to a Nash equilibrium, which is not desirable in games like the Prisoner's Dilemma (PD). In a previous paper, the author proposed utility-based Q-learning for PD, which used utilities as rewards in order to maintain Mutual Cooperation once it had occurred. However, since an agent's action depends only on the relation between its Q-values, Mutual Cooperation can also be maintained by adjusting the learning rate of Q-learning. Thus, in this paper, we deal with the learning rate directly and introduce a new Q-learning method called learning-rate adjusting Q-learning, or LRA-Q.
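
    The abstract names the lever but not the exact schedule, so the following is only a hedged illustration of the LRA-Q idea, not the paper's rule: because the chosen action depends only on the ordering of the Q-values, shrinking the learning rate while Mutual Cooperation holds makes that ordering harder to overturn by a single deviation. The halving rule below is an invented placeholder.

        def adjusted_alpha(alpha, mutually_cooperating, floor=0.001):
            # Placeholder schedule (not the paper's): halve the learning rate
            # while Mutual Cooperation holds, so a single temptation payoff
            # cannot flip the Q-value ordering; keep the base rate otherwise.
            return max(floor, alpha * 0.5) if mutually_cooperating else alpha

        # Usage inside a stateless Q-learning loop (sketch):
        #   a_eff = adjusted_alpha(alpha, last_outcome == ('C', 'C'))
        #   q[action] += a_eff * (reward - q[action])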