We model an agent who stubbornly underestimates how much his behavior is driven by undesirable motives, and who, attributing his behavior to other considerations, updates his view about those considerations. We study general properties of the model, and then apply the framework to identify novel implications of partially naive present bias. In many stable situations, a partially naive present-biased agent appears realistic in that he eventually predicts his behavior well. His unrealistic self-view does, however, manifest itself in several other ways. First, in basic settings he always comes to act in a more present-biased manner than a realistic agent. Second, he systematically mispredicts how he will react when circumstances change, such as when incentives for forward-looking behavior increase or he is placed in a new, ex-ante identical environment. Third, he follows empirically realistic addiction-like consumption dynamics that he does not anticipate. Fourth, he holds beliefs that [...]
Micro Theory Seminar
We explore a novel model in which authors of heterogeneous papers submit to ranked journals that apply admission standards to noisy referee evaluations. Journal caliber reflects paper quality in a rational expectations equilibrium. Our main finding is that journal rejection rates first rise and then fall in caliber, and so cannot be used to rank journals. By extension, the logic applies to college rankings. The paper therefore invalidates the use of selectivity to rank journals and colleges. Our theory holds for all signals obeying a novel log-concavity condition that is typically met.
A patient seller decides whether to build a reputation for exerting high effort in front of a sequence of consumers. Each consumer decides whether to trust the seller after she observes the number of times that the seller took each of his actions in the last K periods, but not the order in which these actions were taken. I show that (i) the seller’s payoff from building a reputation is at least his commitment payoff for all K and in all equilibria, and (ii) the seller sustains his reputation for exerting high effort in all equilibria if and only if K is below some cutoff. Although a larger K allows more consumers to observe the seller’s opportunistic behavior, it weakens their incentives to punish the seller after they observe opportunistic behavior. This effect undermines the seller’s reputational incentives and lowers consumers’ welfare.
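The consumer's information set above, counts of the seller's actions over the last K periods with the order discarded, can be sketched in a few lines (the action labels "H" for high effort and "L" for opportunistic play, and the helper name, are illustrative, not the paper's notation):

```python
from collections import Counter

def k_period_record(history, K):
    """The consumer's information set: how many times the seller took
    each action in the last K periods, with the order discarded."""
    return Counter(history[-K:])

# Two histories that are order-permutations of each other over the
# last K = 3 periods induce the same record, so the consumer cannot
# tell them apart:
r1 = k_period_record(["H", "L", "H"], K=3)
r2 = k_period_record(["L", "H", "H"], K=3)
# r1 == r2 holds: both records count two H's and one L.
```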
The classic wisdom-of-the-crowd problem asks how a principal can “aggregate” information about the unknown state of the world from agents without understanding the information structure among them. We propose a new simple procedure called Population-Mean-Based Aggregation to achieve this goal. The procedure requires eliciting only agents’ beliefs about the state and, from some agents, their expectations of the average belief in the population. We show that this procedure fully aggregates information: in an infinite population, it always infers the true state of the world. The procedure can accommodate correlations in agents’ information, misspecified beliefs, and any finite number of possible states of the world, and it requires only very weak assumptions on the information structure.
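A toy simulation sketch of the idea, under strong assumptions not in the abstract: a binary state, a uniform prior, and conditionally i.i.d. signals. The decision rule below (comparing the population's mean belief with the mean expectation of the average belief) is one illustrative instantiation, exploiting the fact that Bayesian expectations of the average belief are shrunk toward the prior; it is not the paper's exact procedure.

```python
import random

def aggregate(theta, n=20000, acc=0.7, seed=0):
    """Toy sketch of population-mean-based aggregation (illustrative).
    Binary state theta in {0, 1}, uniform prior, and conditionally
    i.i.d. binary signals of accuracy `acc`."""
    rng = random.Random(seed)
    beliefs, expected_means = [], []
    for _ in range(n):
        # Each agent's signal matches the state with probability `acc`.
        s = 1 if rng.random() < (acc if theta == 1 else 1 - acc) else 0
        b = acc if s == 1 else 1 - acc           # posterior P(theta=1 | s)
        p1 = b * acc + (1 - b) * (1 - acc)       # P(another's signal = 1 | s)
        # The agent's expectation of the population's average belief.
        expected_means.append(p1 * acc + (1 - p1) * (1 - acc))
        beliefs.append(b)
    mean_belief = sum(beliefs) / n
    mean_expected = sum(expected_means) / n
    # Expectations of the average belief are biased toward the prior,
    # so the sign of the gap reveals the state in this toy setting.
    return 1 if mean_belief > mean_expected else 0
```

With accuracy 0.7 and state 1, for example, the mean belief converges to 0.58 while the mean expectation of the average belief converges to about 0.51, so the gap is positive and the procedure infers state 1.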
Over the last two decades we have developed a good understanding of how to quantify the impact of strategic user behavior on outcomes in many games (including traffic routing and online auctions), and have shown that the resulting bounds extend to repeated games, assuming players use a form of no-regret learning to adapt to the environment. Unfortunately, these results do not apply when outcomes in one round affect the game in the future, as is the case in many applications. In this talk, we study this phenomenon in the context of a game modeling queuing systems: routers compete for servers, and packets that do not get served need to be resent, resulting in a system where the number of packets at each round depends on the success of the routers in the previous rounds. In joint work with Jason Gaitonde, we analyze the resulting highly dependent random process. [...]
We present an equilibrium model of politics in which political platforms compete over public opinion. A platform consists of a policy, a coalition of social groups with diverse intrinsic attitudes to policies, and a narrative. We conceptualize narratives as subjective models that attribute a commonly valued outcome to (potentially spurious) postulated causes. When quantified against empirical observations, these models generate a shared belief among coalition members over the outcome as a function of its postulated causes. The intensity of this belief and the members’ intrinsic attitudes to the policy determine the strength of the coalition’s mobilization. Only platforms that generate maximal mobilization prevail in equilibrium. Our equilibrium characterization demonstrates how false narratives can be detrimental for the common good, and how political fragmentation leads to their proliferation.
We study how long-lived, rational, exponentially discounting agents learn in a social network. In every period, each agent observes the past actions of his neighbors, receives a private signal, and chooses an action with the objective of matching the state. Since agents behave strategically, and since their actions depend on higher order beliefs, it is difficult to characterize equilibrium behavior. Nevertheless, we show that regardless of the size and shape of the network, and the patience of the agents, the speed of learning in any equilibrium is bounded from above by a constant that only depends on the private signal distribution.
Posterior implementation is a solution concept for mechanism design with interdependent values. It requires that each agent’s strategy is optimal against the strategies of other agents for every possible message profile. Green and Laffont (1987) give a geometric characterization of posterior implementable social choice functions for binary collective decision problems with two agents and non-transferable utility. This paper generalizes the analysis to any finite number n of agents, with three main insights. First, posterior implementable social choice functions are implementable by score voting: each agent submits a number from a set of consecutive integers; the collective decision is determined by whether or not the sum exceeds a given quota. Second, the possibility of posterior implementation depends crucially on the number of agents: in generic environments with n ≥ 3 agents, a (responsive) social choice function is posterior implementable [...]
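The score-voting form in the first insight can be sketched as a minimal decision rule (the message set and quota in the example are illustrative, not taken from the paper):

```python
def score_vote(scores, quota):
    """Score voting: each agent submits an integer from a set of
    consecutive integers; the collective decision is 1 (e.g., adopt
    the proposal) iff the sum of submitted scores exceeds the quota."""
    return 1 if sum(scores) > quota else 0

# Illustrative example: three agents with messages in {0, 1, 2}
# and quota 3.
decision = score_vote([2, 1, 1], quota=3)  # sum = 4 > 3, so decision = 1
```

Only the sum of messages matters, so the rule is anonymous in the agents' reports; the quota pins down how demanding the collective decision is.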
This paper studies learning from multiple informed agents, each of whom has a small piece of information about the unknown state of the world in the form of a noisy signal and sends a message to the principal, who then makes a decision that is not constrained by predetermined rules. In contrast to the existing literature, I model the conflict of interest between the principal and the agents more generally, considering the case where their preferences are misaligned in some realized states. I show that if the conflict of interest is moderate, there is a discontinuity: when the number of agents is large enough, adding even a tiny probability of misaligned states leads to complete unraveling, in which the agents ignore their signals, in contrast to the almost complete revelation predicted by the existing literature. [...]
I study strategic information disclosure in networks. When agents' preferences are sufficiently diverse, the optimal network is the line in which the agents are ordered according to their ideologies. Such optimal networks obtain as Nash equilibria of a game in which each link requires sponsorship by both connected agents, and are the unique strongly pairwise stable networks. These results overturn classical results on non-strategic information transmission in networks, where the optimal and pairwise stable network is the star. In political economy environments, such as networks of policy-makers, interest groups, or judges, these results suggest positive and normative rationales for "horizontal" links between like-minded agents in political networks, as opposed to the hierarchical networks that have been shown to be optimal in organizations where agents' preferences are more closely aligned.
We study a common-value auction in which a large number of identical, indivisible objects are sold to a large number of ex-ante identical bidders with unit demand. Bidders are initially uninformed but can acquire information from multiple sources that differ in accuracy and cost. We define a cost-accuracy ratio for each available source of information. The minimum value of this cost-accuracy ratio among all information sources fully determines the limit price distribution and the information content of the auction's price. Information is aggregated if and only if the minimum cost-accuracy ratio is equal to zero. We also characterize all equilibria of the auction for posterior separable information costs with a sufficiently rich set of experiments. In this case, information is aggregated if and only if the cost function is differentiable at the prior.
We study how to optimally design non-market mechanisms for allocating scarce resources, taking into consideration agents' investment incentives. A principal wishes to allocate a resource of homogeneous quality, such as seats in a university, to a heterogeneous population of agents. She commits ex-ante to a possibly random allocation rule, contingent on a unidimensional characteristic of the agents she intrinsically values. The principal cannot resort to monetary transfers. Agents have a strict preference for allocation and can undertake a costly investment to improve their characteristic before it is revealed to the principal. We show that while random allocation rules have the effect of encouraging investment, especially at the top of the characteristic distribution, deterministic pass-fail allocation rules, such as exams with a pass grade, prove to be optimal.
The paper analyzes information sharing in neutral mechanisms when an informed party will face future interactions with an uninformed party. Neutral mechanisms are mechanisms that do not rely on (1) the provision of evidence, (2) conducting experiments, (3) verifying the state, or (4) changing the after-game (i.e., the available choices and payoffs of future interactions). They include cheap talk, long cheap talk, noisy communication, mediation, money burning, and transfer schemes, among other mechanisms. To analyze this problem, the paper develops a reduced-form approach that characterizes the agents’ payoffs in terms of belief-based utilities. This effectively induces a psychological game, where the psychological preferences summarize information-sharing incentives. [...]
A decision-maker must accept or reject a privately informed agent. The agent always wants to be accepted, while the decision-maker wants to accept only a subset of types. The decision-maker has access to a set of feasible tests and, prior to making a decision, requires the agent to choose a test from a menu, which is a subset of the feasible tests. By offering a menu, the decision-maker can use the agent's choice as an additional source of information. I characterise the decision-maker's optimal menu for arbitrary type structures and feasible tests. I then apply this characterisation to various environments. When the domain of feasible tests contains a most informative test, I characterise when only the dominant test is offered and when a dominated test is part of the optimal menu. I also characterise the optimal menu when types are multidimensional or when tests vary in their difficulty.
I study how organizations assign tasks to identify the best candidate to promote among a pool of workers. When only non-routine tasks are informative about a worker’s potential and non-routine tasks are scarce, the organization’s preferred promotion system is an index contest. Each worker is assigned a number that depends only on his own potential. The principal delegates the non-routine task to the worker whose current index is the highest and promotes the first worker whose type exceeds a threshold. Each worker’s threshold depends only on his own type. In this environment, task allocation and workers’ motivation interact through the organization’s promotion decisions. The organization designs the workers’ careers to both screen and develop talent. Competition is thus mediated by the allocation of tasks: who gets the opportunity to prove themselves is a determining factor in promotions. [...]