19 December 2010

On the Costly Monitoring Model

What I found to be the most interesting model we studied in macroecon was Williamson’s model of costly monitoring. In the model, some entrepreneurs with high auditing costs do not receive credit from the bank, resulting in a failure of Pareto optimality. Could the government resolve this information problem?

What the government can do that the bank presumably cannot is “punish” the entrepreneur for lying. Under any contract the bank can at most take all the money the entrepreneur has (and the entrepreneur begins with no capital), while the government can impose additional costs on the entrepreneur through imprisonment and the like. This allows the government to use a mixed strategy: if the additional punishment is severe enough, the government can audit only a few randomly selected entrepreneurs and still make the expected cost of lying high enough that entrepreneurs always tell the truth. In that case, the bank can contract with all entrepreneurs without loss.
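
To make the mixed-strategy point concrete, here is a stylized sketch (not the model from class; gain and punishment are made-up parameters): if an undetected lie nets the entrepreneur a gain, while a detected lie forfeits that gain and adds a punishment, then deterrence only requires the audit probability q to satisfy (1 - q)·gain - q·punishment ≤ 0.

    def min_audit_probability(gain, punishment):
        """Smallest audit probability that deters lying in this stylized setting:
        lying pays (1 - q) * gain - q * punishment, which is non-positive
        once q >= gain / (gain + punishment)."""
        return gain / (gain + punishment)

    # The harsher the punishment, the fewer random audits are needed.
    for k in (1, 10, 100, 1000):
        print(k, min_audit_probability(gain=1.0, punishment=k))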

Of course, if the government’s commitment is credible, then the cost of punishment is irrelevant: all entrepreneurs will tell the truth, so the government never actually needs to punish anyone.

How should the government collect its tax to pay for the auditing? I guess the answer is not so obvious since now the bank may use a different contract. This will be for later.

24 October 2010

Week 4 in Review

I wish I hadn’t skipped the week 3 review. Now I don’t remember what I learned that week.

1. Economic Analysis

The problem set was a killer. I had to solve models, find data, graph data, and read articles. It was very time-consuming, but on the other hand it was somewhat enjoyable. The lectures have also become a little more interesting as the models are less arithmetic and more controversial. We covered social security and public debt, and discussed Ricardian equivalence and the fiscal multiplier.

2. Game Theory

We finished discussing the Bernoulli utility function and began to talk about mixed-strategy equilibrium. I am starting to think that game theory notation is ugly. The lectures have been more or less standard, but I am looking forward to the proof of existence of a Nash equilibrium for any finite game.

3. Statistics

Finished the chapter on joint distributions and began the one on expected value. Joint distributions were somewhat difficult; I feel I need to review my multivariable calculus. I was introduced to the St. Petersburg paradox and realized that gambling has contributed quite a lot to human knowledge.

4. Real Analysis

The midterm was the most difficult exam I have taken so far. There were three parts, and I didn’t even get to read the third one. Anyways, we finished the chapter on integration.

11 October 2010

Week 2 in Review

1. Economic Analysis

I am increasingly inclined to think that macroeconomics isn’t as fun as microeconomics. What’s the big deal about the Laffer curve, anyway? (I am a fan of Martin Gardner.) I do enjoy the algebra, though. Solving the first-order conditions and deriving the steady-state consumption level is like doing a Sudoku puzzle: mind-numbingly fun.

Anyways, the point of this post was to review what I learned, like “lump-sum taxes are non-distortionary but still crowd out consumption” or “the proportional-tax model predicts that the revenue-maximizing tax rate equals the labor share in production, which is empirically around 2/3.” Yawn.
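
As a toy illustration of where a Laffer curve peaks (this is not the course model; the wage w and the labor-supply elasticity eps below are made-up parameters), suppose labor supply has constant elasticity eps in the after-tax wage, n(tau) = ((1 - tau)·w)^eps. Then revenue R(tau) = tau·w·n(tau) is maximized at tau = 1/(1 + eps), which a quick grid search confirms.

    # Toy Laffer curve: labor supply with constant elasticity eps in the
    # after-tax wage, n(tau) = ((1 - tau) * w) ** eps, so revenue
    # R(tau) = tau * w * n(tau) peaks analytically at tau = 1 / (1 + eps).
    w, eps = 1.0, 0.5          # assumed wage and labor-supply elasticity

    def revenue(tau):
        return tau * w * ((1 - tau) * w) ** eps

    taus = [i / 1000 for i in range(1001)]
    tau_star = max(taus, key=revenue)
    print(tau_star, 1 / (1 + eps))   # both roughly 2/3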

2. Game Theory

The challenge problem this week was to derive a condition on choice functions, analogous to WARP, such that any choice function generated by a quasi-rational preference satisfies the condition, and any choice function satisfying the condition is generated by a quasi-rational preference. A quasi-rational preference is transitive and irreflexive, such as “xPy if x-1>y” where x, y are integers. Of course, I have not solved the problem (yet).

We also covered the Gibbard-Satterthwaite theorem and started to build the formal framework of game theory.

3. Statistics

We departed from probability and entered the world of random variables, discussing several important discrete distributions: the Bernoulli, binomial, geometric, and Poisson. I should review these.

4. Real Analysis

The “fun” problem of the first assignment had nothing to do with real analysis, but was still interesting: A set of points in a plane and a subset of all the line segments connecting the points are given. Each point represents a switch. If a switch is flipped, then all other switches connected to that switch (via the line segments) are flipped. Initially all the switches are turned off. Prove you can turn on all the switches.

Apparently the problem can be brought into the realm of linear algebra and solved, but it seems there is a different approach as well. I was thinking about using the Euler characteristic of planar graphs. I have yet to see if this goes anywhere.
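
Reading a “flip” of a switch as toggling that switch together with its neighbors (under the other reading an isolated switch could never be turned on), a brute-force check over small random graphs is at least consistent with the claim. This is only a sanity check I wrote for myself, not a proof:

    import itertools, random

    def all_on_reachable(n, edges):
        """Brute force: is there a set of switches to press that turns every
        switch on, where pressing v toggles v and all of its neighbors?"""
        neighbors = {v: {v} for v in range(n)}           # closed neighborhoods
        for a, b in edges:
            neighbors[a].add(b)
            neighbors[b].add(a)
        for presses in itertools.product([0, 1], repeat=n):
            state = [0] * n                              # all switches start off
            for v, pressed in enumerate(presses):
                if pressed:
                    for u in neighbors[v]:
                        state[u] ^= 1                    # toggle
            if all(state):
                return True
        return False

    random.seed(0)
    for _ in range(20):
        n = random.randint(1, 8)
        edges = [(a, b) for a in range(n) for b in range(a + 1, n)
                 if random.random() < 0.4]
        assert all_on_reachable(n, edges), (n, edges)
    print("all-on state reached in every trial")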

We covered the definition of the Riemann integral and the intuition behind the Lebesgue integral.

03 October 2010

Week 1 in Review

I had an ambitious goal of keeping a daily journal of what I learned, but my ambition died the first day and I decided to keep a brief weekly journal instead.

1. Economic Analysis

The first lecture was a review. The main point of interest was that the amounts of labor and consumption are the same in a single-sector economy (a Robinson Crusoe economy) and in a two-sector economy (a decentralized market economy), given the same preferences and technology. The natural question is, of course, why that is true. I frankly couldn’t answer the question, and embarrassingly I still have not found a satisfactory answer. A point I need to think about a little more.

The second lecture was the beginning of a tax analysis, and since we did not finish analyzing the model, I have an excuse to delay a log on it.

2. Game Theory

The lectures were very satisfying in that they provided mathematically rigorous definitions of the preference relation and the choice function. Ever since I took the real analysis class, I have wanted some exposure to mathematical rigor in economics, and this class seems to provide just that at the right level. The core of the first week’s lectures was the proof of Houthakker’s theorem, which is interesting enough that I may write a post about it later.

3. Statistics

So far, we have only had somewhat boring lectures on probability. There was one interesting question given during class, which was pretty elementary but still quite confusing: there are 49 balls in a jar, numbered from 1 to 49. You choose 7 numbers and 6 balls are drawn. What is the probability that exactly 3 of the drawn balls show numbers you chose? There are 49C6 ways in which 6 balls can be drawn from the jar. Of those 6, we want 3 balls to carry numbers you chose; there are 7C3 ways for 3 drawn balls to match the 7 chosen numbers. The remaining 3 balls should carry numbers you did not choose, and there are 42C3 ways for that to happen. So the probability is (7C3)(42C3)/(49C6), which is around 0.0287. The curious part of the problem is that one can instead think in terms of the numbers you choose rather than the balls you draw; this yields a probability of (6C3)(43C4)/(49C7), which looks different from the previous expression but is in fact the same.
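
A quick sanity check with Python’s math.comb (my own, just to confirm the two expressions agree):

    from math import comb

    # Exactly 3 of the 6 drawn balls carry one of your 7 chosen numbers,
    # counted two ways.
    p_balls   = comb(7, 3) * comb(42, 3) / comb(49, 6)   # count over draws of 6 balls
    p_numbers = comb(6, 3) * comb(43, 4) / comb(49, 7)   # count over choices of 7 numbers

    print(p_balls, p_numbers)        # both are about 0.0287
    assert abs(p_balls - p_numbers) < 1e-12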

4. Real Analysis

I thought the second quarter would be better, and I was wrong. This is the class in which I feel most lost during lecture, so I should really review the material on a daily basis, but I have not even done that. So far, we have proved that differentiability implies continuity, the chain rule, the mean value theorem, and l’Hopital’s rule. Sadly, I don’t think I fully understood any of them. The problem set is also discouraging and I have not touched it yet, so there is not much to discuss.

19 September 2010

The Classroom Game

For those who do not subscribe to Mankiw’s blog, here is a blog post by a Canadian economics professor.

The professor asks a math question in class and gives two possible answers. Students who believe the first answer is correct raise their hands; then students who believe the second answer is correct raise theirs. To the professor’s disappointment, about 75% of the students choose the wrong answer, while only about 10% choose the right one.

Given the way the response is measured, however, it may not be that 75% of students are wrong, 10% right, and 15% uncertain. A similar voting result could have been achieved when 40% are wrong, 30% are right, and the remaining 30% are uncertain.

For the simplicity of the argument, let’s suppose that all students choose to respond. A student has an incentive to choose the right answer, since his answer signals his intelligence. His signal, however, also depends on how the other students have answered. It doesn’t feel good to be wrong, but at least you aren’t so embarrassed if most students are wrong as well.

We can think of a game in which students first make a decision based on what they believe is the right answer, and then change that decision after seeing the initial response (there might be a strategic reason why students tend to avoid the first row). A student believes that the chance the first answer is correct is p and the chance the second answer is correct is 1-p. His initial response is to choose the first answer if p > 1-p and the second answer if p < 1-p (if p = 0.5, he chooses either one).

Once every student has made an initial response, each student decides whether or not to change his response based on his payoff:

                        The majority of other students
    A student           Right answer        Wrong answer
    Right answer        a                   b
    Wrong answer        c                   d

The best case is that the student chose the right answer while the majority of the other students chose the wrong answer. The second best is to choose the right answer while others did too. The worst case is to choose the wrong answer while others chose the right one. So for all students: b > a > d > c.

The key insight is that, for most students, the loss from being in the wrong minority is greater than the gain from being in the right minority. Being in the minority attracts greater attention, and thus the signaling effect is magnified. Being in the wrong minority therefore sends a strong signal of lacking intelligence. On the other hand, being in the right minority can send a strong signal of intelligence, but this signal has the negative side effect of drawing resentment from other students, especially in a class where the grade is curved.

Let’s look at an example. Peter thinks that the second answer is right, but he isn’t completely sure: for Peter, p = 0.3 and 1-p = 0.7. His payoffs are as follows:

                        The majority of other students
    Peter               Right answer        Wrong answer
    Right answer        4                   6
    Wrong answer        -11                 0

Initially, he does not raise his hand for the first answer, since 0.7 > 0.3. It happens, however, that most students raise their hands for it. If he stands firm, there is a 0.3 chance that the majority is right while he is wrong and a 0.7 chance that the majority is wrong while he is right, so his expected payoff is (0.3)(-11) + (0.7)(6) = 0.9. If he changes his response, his expected payoff is (0.3)(4) + (0.7)(0) = 1.2, so he changes his mind and raises his hand for the first answer, even though he doesn’t think it is correct.
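
Peter’s calculation generalizes to a small decision rule. Here is a sketch (the function and its arguments are my own naming, with payoffs taken from the table above) for a student whose initial vote disagrees with the majority:

    def should_switch(p_majority_right, a, b, c, d):
        """Expected payoffs for a student who initially voted against the majority.
        a: student right, majority right;  b: student right, majority wrong;
        c: student wrong, majority right;  d: student wrong, majority wrong."""
        stay   = p_majority_right * c + (1 - p_majority_right) * b   # keep own answer
        switch = p_majority_right * a + (1 - p_majority_right) * d   # join the majority
        return switch > stay, stay, switch

    # Peter puts probability 0.3 on the majority's (first) answer being correct.
    print(should_switch(0.3, a=4, b=6, c=-11, d=0))   # (True, 0.9, 1.2): he switches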

The result of this is an exaggeration in the voting. A student may vote for the first answer, even if he thinks the second answer is more likely to be correct, when more than half of the class vote for the first. Conversely, a student who thinks that the first answer is correct may not vote for it if only a few other students have voted for it.

When 40% are wrong, 30% are right, and 30% are uncertain, we could expect around 55% of students to initially choose the wrong answer. This will attract more students to the wrong answer, especially the remaining 15% who are uncertain. Depending on the level of certainty and on individual payoffs, even those who are right may choose to vote for the wrong answer or hesitate to vote for the right one. Thus this situation can easily end up with 75% voting for the wrong answer and 10% for the right one.

09 September 2010

The ‘Guess My Number’ Game

I have recently begun studying programming with Michael Dawson’s Python Programming for the Absolute Beginner. One of the exercises in the book is to program a ‘Guess My Number’ game, in which the player chooses an integer between 1 and 100 and the computer tries to guess it (the version in which the roles of player and computer are reversed is easier). After the computer makes a guess, the player indicates whether the chosen number is higher or lower than the guess, and the computer keeps guessing until it is correct.

In hindsight this was a fairly simple exercise, but it took me quite a while to figure out that I need to keep track of the lower and upper bounds in order to make a reasonable guess. So, for example, if the program guesses 47 and the number is higher, the program sets 47 as the lower bound, so its next guess will be greater than 47.

First I programmed the computer to guess the average of the lower and upper bounds (dropping the decimals, so its initial guess, after averaging 0 and 101, is 50). For a randomly chosen number, this program takes an average of 5.8 trials, and I think this is the lowest possible, but the verification will require another post.
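
Here is a minimal sketch of this median-guess strategy (my own reconstruction, not Dawson’s code), together with a quick tally of the guesses it needs over all 100 possible numbers:

    def median_guesser(secret, lo=0, hi=101):
        """Always guess the midpoint of the current exclusive bounds (decimals
        dropped); return the number of guesses used to find 'secret'."""
        trials = 0
        while True:
            guess = (lo + hi) // 2
            trials += 1
            if guess == secret:
                return trials
            elif guess < secret:
                lo = guess        # secret is higher: raise the lower bound
            else:
                hi = guess        # secret is lower: lower the upper bound

    counts = [median_guesser(n) for n in range(1, 101)]
    print(sum(counts) / len(counts))   # 5.8 guesses on average
    print(max(counts))                 # 7 guesses in the worst case (e.g. 100)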

The problem, however, is that the player will easily figure out the pattern, so after playing a few games he or she will choose the number that requires the most trials. For example, if the player chooses 100, the program always takes 7 trials to make the correct guess. So once the player figures out the program’s algorithm, the program will take an average of 7 guesses before it hits the right number.

This is pretty obvious, but the formality of game theory might be helpful. There are two players, the computer and the human. The computer prefers a lower average number of trials, while the human prefers the reverse. The computer’s strategy set consists of guessing algorithms; the human’s consists of the numbers from 1 to 100. The game is sequential: the computer plays first and the human plays next (which implies that the human knows the computer’s strategy before choosing the number).

So the question is: What is the best strategy for the computer? I thought that any strategy that does not involve randomness could be easily exploited by the human, so I made a second program that guesses a random value between the lower and upper bounds.

What would be the average number of trials for this second program? First I need to determine whether the player will prefer certain numbers over others. I am already stuck here, but my guess is that either all values have the same average number of trials or 50 has the highest (since it has the greatest potential for fluctuation). If either were true, then we would only need to calculate the average number of trials for guessing 50.

It didn’t sound too hard at first, but I was wrong. All I managed to figure out was that the probability of guessing 50 in exactly 2 trials is 1.376% (well, I also calculated that the chance of guessing 100 in exactly 2 trials is 5.177%, which seems to support the idea that 50 indeed has the highest average number of trials). I was still curious enough to play 20 games and found the average number of trials to be 8.45. Choosing the guess at random does not seem to be a good strategy for the computer, even in repeated games.
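
A quick Monte Carlo sketch of the random-guess strategy (again my own, with the player fixed at 50) suggests an average of roughly 8 trials, in line with my 20-game estimate:

    import random

    def random_guesser(secret, lo=0, hi=101):
        """Guess uniformly at random within the current exclusive bounds;
        return the number of guesses needed to find 'secret'."""
        trials = 0
        while True:
            guess = random.randint(lo + 1, hi - 1)
            trials += 1
            if guess == secret:
                return trials
            elif guess < secret:
                lo = guess
            else:
                hi = guess

    random.seed(1)
    games = 100_000
    print(sum(random_guesser(50) for _ in range(games)) / games)   # roughly 8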

Is taking the median of the lower and upper bounds the best strategy for the program? I still do not have a confident answer, but it seems so. One could add complexity by allowing the computer to use a mixed strategy. For example, if in any given game there is an 80% chance that the program uses the median strategy and a 20% chance that it uses the random strategy, that may change the player’s best response to the program’s advantage.

The game can become even more complex if the program is allowed to change its strategy from game to game based on the player’s previous choices of number. For example, if the player keeps choosing 100, the program might make 100 its initial guess in the next game.

Finally, the player could be allowed to lie for a certain penalty. The player would then need to choose whether to lie or be honest each turn, and the program would need some mechanism to estimate the likelihood of lying. This could in fact be an engaging strategy game, despite its seeming simplicity.

16 July 2010

On Nudge

If I have high expectations of a book because I have heard much praise for it, would I end up liking the book more or less? The authors of Nudge might suggest that, given our tendency to conform to what other people do, I would value the book more highly, both because of the information conveyed by other people’s judgments and because of peer pressure. Well, there is no peer pressure in my current context, so maybe that’s why I was somewhat disappointed.

The book is easy to read and yet by no means shallow. I strongly agree with the authors on many points, especially on marriage and education. The book presents a strong justification of what the authors call libertarian paternalism. Many applications of Nudge seem appealing.

Yet I had a difficult time finishing the book. Some examples and concepts, although very interesting and even inspiring, seemed too much of a stretch to connect to the book’s main points. I felt that the book jumped here and there, sometimes digressing into the wilderness and sometimes coming back to the same point over and over. The flow of the book was less than ideal.

I was also a little concerned with the recurring dichotomy of Humans and Econs, the irrational and rational aspects of an individual. I don’t think even classical economists necessarily assume that individuals always behave rationally, making complex mathematical calculations to behave optimally. Rather, they would claim that people’s economic (and other) behavior can be explained through rational, mathematical models. I thought that many of the cases in which people fail to behave optimally were due to a lack of information rather than to irrational aspects of human nature.

With all that said, I do think the principles of Nudge present great possibilities for improvement in all kinds of things, as the authors claim. I think one of the bonus nudges, discouraging college students from using trays in the cafeteria, could easily be employed at UChicago.

07 July 2010

On Freakonomics

It is slightly embarrassing to confess that I had not read Freakonomics until recently, but on the other hand I am somewhat glad that I delayed reading it until I had at least some exposure to economics. I believe having some knowledge of economics allowed me to appreciate the book more than I would have otherwise. Many thoughts came to mind while reading the book, and I’ve been intending to scribble them down for quite a few days now (why didn’t I do so immediately after finishing the book, when my thoughts were more vivid? Ask a behavioral economist).

The Q&A section of the book suggests that the chapter on abortion and the crime rate was the most controversial one, so I’ll start with that. The gist of the argument in that chapter is that legalizing abortion causes the crime rate to go down. The argument does not imply that abortion should be legalized (abortion might be categorically impermissible, and even if you believe in consequentialism, you could argue that human life is worth more than a reduced crime rate), but one can see why the argument faces unfriendly reactions.

Nevertheless, I think it is more than just the sensitivity of the subject matter that makes the argument controversial. Even if you think abortion should be legalized and are comfortable with the idea of quantifying the value of human life (the authors offer a ratio of 100 fetuses to 1 newborn as an example of the relative value of human lives, used to measure the efficiency of the trade-off between abortion and the crime rate), the argument is disturbing. Consider the secondary causes of the reduction in crime that the authors suggest, namely the increased number of prisons and police (essentially a greater incentive not to commit crime). As the authors themselves write, these factors do not “address the root causes of crime.” It almost seems that the only way to get rid of crime is to get rid of criminals before they are literally born. Of course this is not necessarily true, since there can be other plausible solutions, but the authors at least suggest that the end we achieved (a reduced crime rate) is not the result of the means we wanted. I think this has significant implications for politics, and more specifically for liberal ideology, but I will not venture to discuss those here. Economics alone is sufficiently dismal.

What I personally find a little more troublesome is the chapter on parenting. The main argument of the chapter is that “it isn’t so much a matter of what you do as a parent; it’s who you are.” This claim, however, comes with a lot of ‘but’s. To start, it does matter what you do if what you are doing is bad: “Clearly, bad parenting matters a great deal” (otherwise it would be difficult to explain why legalizing abortion reduces the crime rate). So if you are beating your children, your children will be affected, but if you are intending to do good, then those actions have no influence.

But there is another catch. The ECLS data show that what you do as a parent doesn’t affect your child’s school performance, not necessarily your child’s whole life. The authors write: “since most parents would agree that education lies at the core of a child’s formation, it would make sense to begin by examining a telling set of school data.” Fair enough, but the story not only begins with school performance, it also ends there. So I am convinced that having lots of books in the house or having Mozart playing all the time won’t improve a child’s school performance, but I am not fully convinced that they don’t matter. A child’s smartness, measured by IQ, is probably the main determinant of school performance, especially in the early years, and I am willing to admit that a better neighborhood or museum trips won’t improve a child’s IQ. But as the authors state, school performance is “a useful but fairly narrow measurement… poor testing in early childhood isn’t necessarily a great harbinger of future earnings, creativity, or happiness.” And then the chapter ends with a brief mention of Sacerdote’s research showing that “the influence of the adoptive parents … made the difference [on children’s higher education and career].”

Though I admittedly exaggerated a little here, the whole chapter on parenting doesn’t seem to have a strong message. What you do as a parent doesn’t affect your children’s IQs, but it could affect their future careers. I can agree that reading to your children won’t affect their school performance, but I am not convinced it doesn’t matter.

This was, all in all, a fun and interesting book to read, and I am excited to read SuperFreakonomics, but coming up next is Nudge.