The Academic Logbook, old posts
Tuesday, 6 September 2011
All new updates
All new posts will be placed on academiclogbook.blogspot.com. I changed the address because I noticed the typo.
Monday, 5 September 2011
Olivier Roy, the rest of the thesis (more or less)
There is a concept in Olivier Roy's thesis that I hadn't heard of before, and which intrigues me: the notion of Stackelberg solutions, which he briefly discusses in section 3.6.
A Stackelberg equilibrium is a strategy profile in which all players play as if their fellow players were mind readers; that is, everyone plays as if they were making the first move in an extensive game with perfect information. This naturally leads to Pareto-efficient coordination in Hi-Lo games.
But what I find interesting about the concept is that Stackelberg equilibria (in dynamic systems) are to global maxima (in decision problems) as Nash equilibria are to local maxima.
To spell this out, think of a two-player game as a dynamical system consisting of two balls rolling around on two inclined planes. The position of the first ball determines the inclination of the other ball's plane, and vice versa. You then find the Nash equilibria by looking for places to put the balls such that neither of them will roll anywhere. Any such pair of positions will do, so we are content with any stationary point of the utility function (or of its restriction to the boundary of the feasible set).
In a Stackelberg solution, we aren't content with just any stationary point; we want it to be globally optimal. We are thus looking for the lowest stable position we can place one ball in such that the other ball is also lying in its lowest stable position.
Such points may not exist. For instance, in Battle of the Sexes there is no such equilibrium: if we place one ball in its lowest stable position, the other ball will come to lie in a stable position, but not in its lowest one.
In the Hi-Lo game he discusses, the Stackelberg payoff from playing Hi with probability x is a piecewise linear function:
- u(x) = 1 - x for 0 <= x < 1/3 (the other player plays Lo)
- u(x) = 2/3 for x = 1/3 (the other player plays whatever)
- u(x) = 2x for 1/3 < x <= 1 (the other player plays Hi)
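To check the arithmetic, here is a minimal Python sketch. The payoff matrix is my assumption (2 for Hi-Hi, 1 for Lo-Lo, 0 for mismatches), not a matrix taken from the thesis.

```python
# Leader's Stackelberg payoff in a Hi-Lo game, assuming the payoffs
# Hi-Hi = 2, Lo-Lo = 1, mismatch = 0 (my guess, not Roy's exact matrix).
from fractions import Fraction  # exact arithmetic, so x = 1/3 behaves

def follower_best_reply(x):
    """The follower's best reply when the leader plays Hi with probability x."""
    u_hi = 2 * x   # follower's expected payoff from playing Hi
    u_lo = 1 - x   # follower's expected payoff from playing Lo
    if u_hi > u_lo:
        return "Hi"
    if u_hi < u_lo:
        return "Lo"
    return "either"  # indifferent; happens exactly at x = 1/3

def leader_payoff(x):
    """The leader's expected payoff against a best-replying follower."""
    reply = follower_best_reply(x)
    if reply == "Hi":
        return 2 * x
    if reply == "Lo":
        return 1 - x
    return Fraction(2, 3)  # both replies give the leader 2/3 at x = 1/3

for x in [Fraction(0), Fraction(1, 5), Fraction(1, 3), Fraction(1, 2), Fraction(1)]:
    print(f"x = {x}: follower plays {follower_best_reply(x)}, leader gets {leader_payoff(x)}")
```

Note the local maximum at x = 0 (payoff 1, the Lo-Lo outcome) versus the global maximum at x = 1 (payoff 2, the Hi-Hi outcome); this is exactly the local/global contrast from the balls-and-planes picture.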
In Battle of the Sexes, the utilities look quite similar, but the globally optimal strategy of one player implies that the other player must play a strategy that isn't globally optimal.
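For contrast, the same computation for Battle of the Sexes, using the standard textbook payoffs (which may not be the ones Roy uses): (A, A) pays (2, 1), (B, B) pays (1, 2), and mismatches pay (0, 0).

```python
# Player 1's leader payoff in Battle of the Sexes, assuming (A, A) pays
# (2, 1), (B, B) pays (1, 2), and mismatches pay (0, 0).
from fractions import Fraction

def leader_payoff_bos(x):
    """Player 1's payoff from playing A with probability x against a
    best-replying player 2 (ties broken in player 1's favour)."""
    u2_a = 1 * x        # player 2's expected payoff from playing A
    u2_b = 2 * (1 - x)  # player 2's expected payoff from playing B
    if u2_a >= u2_b:
        return 2 * x    # player 2 joins in on A
    return 1 - x        # player 2 insists on B; (B, B) pays player 1 only 1

print(leader_payoff_bos(Fraction(1)))  # 2: player 1's global optimum, reached
                                       # at (A, A), where player 2 gets only 1
```

Player 1's leader payoff is maximised at x = 1, which pins play to (A, A); by symmetry, player 2's is maximised at (B, B). The two globally optimal leader strategies are therefore incompatible, which is why no Stackelberg equilibrium exists here.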
New bookpile
I've now triumphantly returned from the philosophy library at Singel carrying three heavy books:
- Benz, Jäger, and van Rooij (eds.): Game Theory and Pragmatics (Palgrave Macmillan, 2006)
- Harsanyi: Rational Behavior and Bargaining Equilibrium in Games and Social Situations (Cambridge UP, 1977)
- Franke: Signal to Act (ILLC, 2009)
Complexity: 5 Questions
There are a number of handy references in Complexity: 5 Questions that I want to pursue at some point.
The first came from W. Brian Arthur, who recommends the paper "Evolutionary Phenomena in Simple Dynamics" by Kristian Lindgren. He describes it as a model of an iterated prisoner's dilemma game in which there are no equilibria (thus constant fluctuation) but apparently some sort of evolutionary adaptation.
That sounds relevant. It should go on the reading list along with Rubinstein's paper on bargaining, and perhaps the paper "On the Evolutionary Dynamics of Meaning/Word Associations" from Game Theory and Pragmatics (2006). The latter is probably highly relevant to my critique of Franke.
Speaking of evolutionary dynamics of word/meaning associations, I should look up some of Luc Steels's work on this topic. I remember him referring, during his ESSLLI lecture last year, to some "mathematical proofs" about the equilibrium dynamics of the language simulations they run at the AI lab in Brussels.
Also, several authors mentioned P. W. Anderson's 4-page paper "More is Different" (1972) as an important work. That might be two hours well spent sometime.
Friday, 2 September 2011
Olivier Roy: Thinking Before Acting, pp. 1-50
I'm reading Olivier Roy's PhD thesis. So far, I've read the first 50 pages or so.
Roy uses a very classical picture of "instrumental reason" in which an agent's intentions are modelled as sets of outcomes. So, for instance, when I play rock-paper-scissors, I may "intend" different combinations of R, P, and S. In particular, I may "intend" losing combinations.
This is quite peculiar, but it also leads to a more versatile notion of rationalizability, because players can wonder not only about the strategies of their fellow players, but also about their intentions. (For instance, they may not care about the outcome of the game, or may even play to lose.)
In chapter 2, he introduces a concept he calls "payoff-compatible intentions." This is a little convoluted and needs some unpacking:
He actually doesn't model intentions as sets of outcomes, but as a system of sets of outcomes, and he places a number of strong assumptions on those systems---essentially, he wants the system to be generated by a non-empty bottom element in the subset ordering. The intention system then coincides exactly with the system of supersets of this "most precise intention."
Under his assumptions, if you start from some intention set and move upwards in the lattice representing the subset ordering, you always stay inside the intention system: being more lax about outcomes never disqualifies an intention. The other direction doesn't always hold, though: if you get more specific, you don't necessarily stay in the intention system.
A payoff-compatible intention system is then one in which you can also move downwards, as long as doing so doesn't lower your payoff. So, if A is a subset of B, and the outcomes in A are as good as the outcomes in B, then A will be in the intention system whenever B is.
In that case, the bottom element of the intention system consists only of maximal elements with respect to the preference ordering (because smaller-and-better subsets are always also part of the system). Payoff-compatible thus means very, very picky with respect to payoffs or preferences---in fact, so picky that your most precise intention always consists only of the best possible outcomes. He uses this in chapter 3 to filter out the Lo-Lo equilibria in Hi-Lo games.
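To make this concrete, here is a toy formalization in Python. Everything in it is my reconstruction: the outcomes and payoffs are invented, and I'm reading "the outcomes in A are as good as the outcomes in B" as "every outcome in A is at least as good as every outcome in B", which seems to be the reading that collapses the bottom element onto the best outcomes.

```python
# A toy model of intention systems as I read Roy: an intention system is
# the set of all supersets of a non-empty "most precise intention", and
# payoff-compatibility demands that whenever B is intended and A is a
# subset of B containing only outcomes at least as good as everything
# in B, then A is intended too. Outcomes, payoffs, and the reading of
# "as good as" are my assumptions, not definitions from the thesis.
from itertools import chain, combinations

payoff = {"hi_hi": 2, "lo_lo": 1, "mismatch": 0}  # hypothetical outcomes
outcomes = frozenset(payoff)

def nonempty_subsets(s):
    items = sorted(s)
    return map(frozenset, chain.from_iterable(
        combinations(items, k) for k in range(1, len(items) + 1)))

def intention_system(bottom):
    """The system generated by a most precise intention: all its supersets."""
    bottom = frozenset(bottom)
    return {s | bottom for s in nonempty_subsets(outcomes)}

def payoff_compatible(system):
    """If B is in the system and A <= B with min payoff(A) >= max payoff(B),
    then A must be in the system as well."""
    for b in system:
        for a in nonempty_subsets(b):
            as_good = min(payoff[o] for o in a) >= max(payoff[o] for o in b)
            if as_good and a not in system:
                return False
    return True

# Generated by {hi_hi, lo_lo}: not payoff-compatible, because {hi_hi} is
# a smaller intention that is at least as good, yet lies below the bottom.
print(payoff_compatible(intention_system({"hi_hi", "lo_lo"})))  # False
# Generated by the single best outcome: payoff-compatible.
print(payoff_compatible(intention_system({"hi_hi"})))           # True
```

On this reading, any most precise intention that includes a suboptimal outcome immediately breaks payoff-compatibility, which is what lets him discard Lo-Lo in chapter 3.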
Corpora of historical English
Yesterday, I sent an email to the Amsterdam Center for Language and Communication asking whether they have access to the ARCHER corpus, the Helsinki corpus, or any other corpora of historical English. I haven't heard back from them yet.
My idea is to use these corpora to study historical changes in the frequency of different senses of words. So, for instance, revolution more frequently meant "circular movement" than "violent political change" until at least the 16th century. I'd like to substantiate this idea with a number of quantitative studies for a paper on metaphor theory.
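Just to fix the method, here is the kind of tabulation I have in mind, on invented data; the numbers and the annotation format are made up for illustration.

```python
# Tabulate the relative frequency of each sense of "revolution" per
# half-century, given sense-annotated attestations. The data below is
# invented for illustration; real input would come from ARCHER/Helsinki.
from collections import Counter, defaultdict

attestations = [  # (year, sense) pairs -- hypothetical
    (1475, "circular movement"), (1520, "circular movement"),
    (1595, "circular movement"), (1640, "violent political change"),
    (1688, "violent political change"), (1790, "violent political change"),
]

by_period = defaultdict(Counter)
for year, sense in attestations:
    period = (year // 50) * 50  # bucket into half-centuries
    by_period[period][sense] += 1

for period in sorted(by_period):
    counts = by_period[period]
    total = sum(counts.values())
    shares = ", ".join(f"{sense}: {n / total:.0%}" for sense, n in counts.items())
    print(f"{period}-{period + 49}: {shares}")
```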
MacKay: Information Theory, Inference, and Learning Algorithms
Yesterday, I checked whether David MacKay's book on information theory was available at UBA, the UvA library. It wasn't.
It turns out that book suggestions are made here: http://www.uba.uva.nl/services/object.cfm/E628BB72-B3EA-4CFA-945DA8BB15976979, or through uba.uva.nl > Services > Library services A - Z > Book suggestions. I'll request it now.
By the way, the book is David J. C. MacKay: Information Theory, Inference, and Learning Algorithms, Cambridge U.P. 2003; ISBN 0-521-64298-1.