File spoon-archives/marxism-international.archive/marxism-international_1997/97-03-18.151, message 94


Date: Tue, 18 Mar 1997 08:01:49 -0800 (PST)
From: " Rahul  Mahajan" <rahul_saumik-AT-hotmail.com>
Subject: M-I: Game Theory 1 - The basic idea


I think that the question, or questions, of game theory is of some importance
for socialists today, at least in the same sense that neoclassical economics is
(i.e., to be able to dismiss its unwarranted conclusions and its use where it
is inapplicable). It is not sufficient to dismiss it simply by saying "socialism
is not a game" or "game theory is based on an atomized notion of rationality
rather than a reality of collectives which are bound together by solidarity."

A little precis of game theory, in case anyone is unfamiliar with the subject:

The basic idea is simple enough. The universe considered is one in which every
event is a game with some fixed number of players: each player chooses a
strategy, and a payoff for each player is determined by the set of all
strategies chosen. That is, if there are n players in a game, the payoffs are
given by a function which, given an n-tuple of strategies (one for each
player), returns a set of n numbers, the respective payoffs for each player.
And, of course, most important, each player tries to maximize her payoff.
The first bit of legerdemain comes in here. It is undoubtedly acceptable to
consider such a universe in the abstract, but the assumption of those who use
game theory in the social sciences (in a consistent, rather than an itsy-bitsy
eclectic "interdisciplinary" way) is that the entire universe of human actions
can be modelled thus. This in its turn is undoubtedly a hypothesis which it is
acceptable to make, as long as one defines and carries out a rigorous procedure
for testing the results of this hypothesis. Instead, however, it is often
presented as obvious, conventional wisdom, not worth arguing, in much the same
way as the claim that a country's lack of prosperity is due to labor's
intransigence or to "inflexibility" of one kind or another. Unfortunately, it
is more likely than these other truisms of late capitalism to make sense to
many people on the left. I will argue later on that, even if one agrees to
accept it merely as a hypothesis to be tested, the lack of a clear way to test
its validity leads to a tremendous possibility for obfuscation, and very little
for enlightenment.

Next, there comes a divide between two branches -- noncooperative games and
cooperative games. The distinction is simply that noncooperative games involve
no communication between the players, while cooperative games allow
communication (which can be unrestricted or partially restricted, according to
the game), thus allowing for a whole web of strategies about the strategies, if
you will. For this reason, noncooperative games are generally much simpler to
analyze. Among noncooperative games, there is a further divide, between
zero-sum and nonzero-sum games. A zero-sum game is one where, for any given set
of strategies by the players, the sum of all the payoffs is zero. In such a
game, to use the parlance of contemporary feel-good business-speak, there are
no "win-win" situations. In a two-player zero-sum game, the whole question of
any effective cooperation arising out of "rational" action is moot -- any
change which helps one player hurts the other in equal measure, so there is no
ground for anything but complete antagonism. In a many-player zero-sum game,
there would be a chance for deals to be cut, by which some sub-block of players
could increase their own total payoff, if communication were allowed, but if it
is not, again it's a fairly straightforward, mechanical process to analyze (as
long as it is possible to do the necessary math). However, even very simple
two-player non-zero-sum games can be very complex to analyze fully. The
Prisoner's Dilemma is, in fact, about the simplest example there could be. In
one of the previous posts, someone already explained what it is, but I'll just
reformulate it schematically. There are two players, and each has two
strategies, Cooperate (C) or Defect (D). The payoffs are as follows: if both
cooperate, each player gets three points; if both defect, each player gets one
point; if one cooperates and one defects, the cooperator gets zero points and
the defector gets five points. It's important to note that, not only do the
payoffs not sum to zero (or to a fixed number, which would be equivalent), the
sum of the payoffs if both cooperate, six, is greater than the sum if one
cooperates and one defects, five.

Here's a pictorial representation:

                            Player 2
                         C           D
                     ______________________
               C   |   (3,3)    |   (0,5)
 Player 1          |____________|_________
               D   |   (5,0)    |   (1,1)

The first number in the ordered pair is the payoff to player 1, the second the
payoff to player 2.
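In code, the payoff structure above could be sketched like this (a minimal
Python sketch; the strategy labels and numbers are exactly those in the table):

```python
# A minimal sketch of the Prisoner's Dilemma payoff function, with the
# same numbers as the table above.
PAYOFFS = {
    ("C", "C"): (3, 3),   # both cooperate
    ("C", "D"): (0, 5),   # player 1 cooperates, player 2 defects
    ("D", "C"): (5, 0),   # player 1 defects, player 2 cooperates
    ("D", "D"): (1, 1),   # both defect
}

def payoff(s1, s2):
    """Return the ordered pair (payoff to player 1, payoff to player 2)."""
    return PAYOFFS[(s1, s2)]
```

Note that the pairs do not sum to a constant, and that mutual cooperation
(sum 6) beats the cooperate/defect outcomes (sum 5) in total.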

This game is a very primitive way to model situations which we see all around
us in daily life. The main characteristic is that if everyone cooperated,
everyone would be better off than if they all defected, but that, if most other
people are cooperating, you can fleece them by defecting, or, if most others
are defecting, you'll get fleeced if you cooperate. Examples are all around us.
For example, if everyone drives in an orderly manner and waits their turn
properly, traffic will flow quickly and everyone will be happy -- however, in
such a situation, the first person to defect can get an advantage (say, by
driving along the shoulder to the exit lane, or cutting someone off), and then
things quickly disintegrate into a free-for-all where anyone who waits their
turn has to wait forever, so everyone jockeys for position, but still gets home
later than if everyone waited their turn in the first place. Another is the
reality all around us, that the oppressed people of the world would be much
better off if they rose up and took the power that belongs to them, but those
who try to get people to do it get shot, so the mass of people generally sits
quiescently while it is oppressed. This is not only a real phenomenon, it is
perhaps the primary obstacle to what we all are trying (or claim to be trying)
to achieve. So the question is to see if the game-theoretic analysis of the
Prisoner’s Dilemma gives us any insights.

The first element in the analysis, and perhaps the most important concept in
game theory, is the idea of the Nash equilibrium. A Nash equilibrium is an
n-tuple of strategies by the n players (i.e., an ordered sequence of n
strategies, the first being the strategy of the first player, etc.) such that
no single player can increase her payoff by changing only her strategy. It is
taken as an axiom by many, and it seems deceptively reasonable, that a Nash
equilibrium is somehow the most "natural" state for a game. 

If you analyze the Prisoner’s Dilemma, you see right away that the only Nash
equilibrium is the one where both players defect. If either player chooses to
change from defecting to cooperating, then he will be cooperating while the
other defects, so his payoff will decrease from 1 to 0. Of course, if both
players simultaneously changed to cooperation, both would do better, but that's
not within the definition of a Nash equilibrium, since it's defined with
respect to only a single player's changing her strategy at any time. On the
other hand, the configuration where both cooperate is not a Nash equilibrium
and is inherently unstable, because if either player chooses to change her
strategy from cooperation to defection, she will increase her payoff from 3 to
5, since she will be defecting while the other cooperates. So, at first blush,
game theory seems to tell you that the rational solution involves each person
getting 1 when they could just as easily get 3 each.
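The claim that mutual defection is the only Nash equilibrium can be checked
mechanically, by trying every unilateral deviation (a Python sketch; the
payoffs are from the table above):

```python
# Brute-force search for Nash equilibria in the Prisoner's Dilemma.
STRATEGIES = ("C", "D")
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def is_nash(s1, s2):
    """True if neither player can raise her own payoff by deviating alone."""
    p1, p2 = PAYOFFS[(s1, s2)]
    if any(PAYOFFS[(alt, s2)][0] > p1 for alt in STRATEGIES):
        return False  # player 1 has a profitable unilateral deviation
    if any(PAYOFFS[(s1, alt)][1] > p2 for alt in STRATEGIES):
        return False  # player 2 has a profitable unilateral deviation
    return True

equilibria = [(a, b) for a in STRATEGIES for b in STRATEGIES if is_nash(a, b)]
print(equilibria)  # [('D', 'D')] -- mutual defection is the only equilibrium
```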

One could suggest an alternative mode of analysis, however -- assume that
people are not only rational actors, they expect others to be rational actors,
and expect others to expect others to be rational actors, etc. Now, clearly the
natural configuration of strategies in the Prisoner’s Dilemma is symmetric
between the two players because the whole game has that symmetry. Of the two
symmetrical configurations, clearly the better one is where both cooperate and
each gets 3, rather than the one where both defect and each gets 1. Since one's
opponent is also rational, he must know that, and know that you know that, so
you should cooperate and so should he, and you'll both do better.

So, even for a one-shot Prisoner’s Dilemma, there's a question about what
result should emerge from the analysis. Many people would claim that a Nash
equilibrium is natural in the sense that, since it's stable, the configuration
should tend toward it (tending in what sense, of course, need not be addressed
too clearly). I'll address this claim later. 

When you add in the possibility of repetition of the game, things get more
interesting. An experiment (Robert Axelrod's computer tournament) was done in
the early '80s, which was alluded to on the list some time earlier. The idea
was to have a competition between a bunch
of programs that would play the Prisoner’s Dilemma against each other. Each
pair of programs would play a whole bunch of games (say 100) against each
other, and then each would move on to play other programs in the same way. At
the end of the day, the program that had the highest total number of points
would be the winner. Each program was a set of rules for how to pick one's
strategy against a given opponent. For example, one program might be: cooperate
until the first time the opponent defects, then keep on defecting. Another
might always cooperate, or always defect. Well, when this was done, the
results were very interesting. All kinds of whiz kids wrote horribly
complicated programs trying to do convoluted analyses of all past, present, and
future behavior, using every gimmick they could think of (much like stock
analysts, for example), but the program that won the competition was a very
simple one, called TIT-FOR-TAT (much like the fact that the average chimpanzee
can forecast stocks better than most analysts). The program was as follows:
cooperate the first time; thereafter, do what your opponent did on its previous
move, i.e., keep cooperating until the other program defects, then defect and
keep on defecting until the other program cooperates, then switch back to
cooperation, etc. Not only is this program about the simplest thing you can
think of, it has the unique feature that it can never win in an extended match
(the set of 100 games between two programs is what I'm calling an extended
match). The easiest way to see this is to note that, since it starts
cooperating, it is the first to get stung, if the other program defects at any
point. Thereafter, it may recoup its losses by defecting while the other
cooperates, but, if that happens, TIT-FOR-TAT starts cooperating again and so
can get stung again. So the best it can do is break even. Anyway, even though
it loses or ties every extended match, it beat the hell out of the competition.
This is not some kind of paradox -- it did so by being better at getting other
programs to cooperate than any of the other programs were.
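The tournament setup can be sketched as follows (a toy Python version with a
couple of made-up entrants -- the real contest had far more, so the totals here
won't reproduce the original result, but the match-level property claimed
above, that TIT-FOR-TAT can at best tie an extended match, is easy to verify):

```python
# Toy iterated Prisoner's Dilemma. A strategy is a function from the
# opponent's past moves to "C" or "D". The entrant names are mine.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp):
    # cooperate first; thereafter echo the opponent's previous move
    return opp[-1] if opp else "C"

def always_defect(opp):
    return "D"

def grudger(opp):
    # cooperate until the opponent's first defection, then defect forever
    return "D" if "D" in opp else "C"

def match(strat_a, strat_b, rounds=100):
    """Play an extended match; return (total for a, total for b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Against ALWAYS-DEFECT, TIT-FOR-TAT is stung once, then both defect,
# so it loses the match narrowly: 99 to 104 over 100 rounds.
print(match(tit_for_tat, always_defect))
# Against another "nice" entrant it settles into mutual cooperation.
print(match(tit_for_tat, grudger))
```

The design point worth noticing is that each strategy sees only the opponent's
history, which is all TIT-FOR-TAT needs: its entire cleverness is in what it
induces the other program to do.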

There's the basic info. My analysis will follow in another post.

Rahul



     --- from list marxism-international-AT-lists.village.virginia.edu ---


   
