Micro Theory Seminar & BGSE Workshop
Rooms and dates for the winter term 2024/25
- BGSE Workshop: Wednesdays, 12:00-13:00, Room 0.017, Adenauerallee 24-42, 53113 Bonn
- Micro Theory Seminar: Wednesdays, 16:30-18:00, Faculty Room, Adenauerallee 24-42, 53113 Bonn
Winter Term 2024/25
Fair allocations: competitive equilibrium versus Nash welfare
We consider fair division problems in which a set of common goods or resources has to be allocated to a group of agents. We focus on the setting where the goods are divisible and no monetary transfers are allowed. Two classical fair allocation approaches are Competitive Equilibrium with Equal Incomes (CEEI) and maximising Nash welfare. When utility functions are homogeneous of degree one, these two solution concepts coincide; in general, however, they can lead to different outcomes. The talk focuses on the relationship between these two concepts. In particular, we introduce a broad class of utility functions, called Gale-substitutes, and show that the two concepts are closely related within this class.
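As a concrete illustration of the Nash-welfare side of this comparison, here is a minimal sketch (a toy example of ours, not from the talk) that computes a Nash-welfare-maximizing division of divisible goods under linear, i.e. degree-one homogeneous, utilities; by the coincidence result mentioned above, the maximizer is then also a CEEI allocation. The valuation matrix `V` is hypothetical.

```python
# Toy sketch: maximize Nash welfare (product of utilities) for divisible goods
# under linear utilities, by maximizing the sum of log utilities.
import numpy as np
from scipy.optimize import minimize

V = np.array([[3.0, 1.0],   # hypothetical: agent 0's values for goods 0, 1
              [1.0, 2.0]])  # agent 1's values
n, m = V.shape

def neg_log_nash_welfare(x_flat):
    X = x_flat.reshape(n, m)               # X[i, j] = share of good j given to agent i
    utils = (V * X).sum(axis=1)
    return -np.sum(np.log(np.maximum(utils, 1e-12)))

cons = [{"type": "eq", "fun": lambda x, j=j: x.reshape(n, m)[:, j].sum() - 1.0}
        for j in range(m)]                 # each good is fully allocated
bounds = [(0.0, 1.0)] * (n * m)
x0 = np.full(n * m, 1.0 / n)               # start from an equal split

res = minimize(neg_log_nash_welfare, x0, bounds=bounds, constraints=cons)
print(res.x.reshape(n, m))                 # allocation shares per agent and good
```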
A Theory of Self-Prospection
A present-biased decision maker (DM) faces a two-armed bandit problem whose risky arm generates random payoffs at exponentially distributed times. The DM learns about payoff arrivals through informative feedback. At the unique stationary Markov perfect equilibrium of the multi-self game, positive feedback supports greater equilibrium welfare than both negative and transparent feedback. Regardless of the form of feedback, the DM's behavior exhibits indecision, deriving from their desire to procrastinate. We relate our findings to the theory of self-prospection -- the process of imagining future goals and outcomes when seeking motivation in the present.
Algorithmic Decision Processes
We develop a full-fledged analysis of an algorithmic decision process that, in a multialternative choice problem, produces computable choice probabilities and expected decision times through a sequence of noisy and time-consuming binary comparisons.
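The following toy Monte Carlo sketch (assumptions ours: the streak-based stopping rule, noise level, and alternative values are illustrative and not the paper's model) shows how a sequence of noisy, time-consuming binary comparisons can generate choice probabilities and expected decision times in a multialternative problem.

```python
# Toy simulation: an incumbent alternative faces random challengers in noisy
# binary comparisons; deciding once the incumbent wins STREAK comparisons in
# a row yields choice probabilities and expected decision times.
import random
from collections import Counter

values = [1.0, 0.8, 0.5]    # hypothetical utilities of three alternatives
NOISE, STREAK, RUNS = 0.5, 3, 20_000

def noisy_winner(i, j):
    """Each alternative draws value plus Gaussian noise; the higher draw wins."""
    di = values[i] + random.gauss(0, NOISE)
    dj = values[j] + random.gauss(0, NOISE)
    return i if di >= dj else j

def one_run():
    incumbent, wins, t = 0, 0, 0
    while wins < STREAK:
        challenger = random.choice([k for k in range(len(values)) if k != incumbent])
        t += 1
        w = noisy_winner(incumbent, challenger)
        wins = wins + 1 if w == incumbent else 1   # challenger starts a new streak
        incumbent = w
    return incumbent, t

choices, total_t = Counter(), 0
for _ in range(RUNS):
    c, t = one_run()
    choices[c] += 1
    total_t += t
print({k: v / RUNS for k, v in sorted(choices.items())}, "E[T] =", total_t / RUNS)
```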
Similarity of Information in Games
We propose a new class of stochastic orders to compare the interdependence of joint distributions, which can be used to study the effect of increasing information similarity among players in a game. The orders, named “Concentration along the Diagonal” (CAD), capture the intuitive idea that more similar information means that, conditional on receiving information, each agent believes it is now more likely that others have received the same information. We show that, for canonical binary-action, symmetric, separable games and symmetric pure-strategy Bayes–Nash equilibria, increasing similarity of information in the CAD order is equivalent to expanding (shrinking) the equilibrium set when the game exhibits strategic complementarity (substitutability).
Deep Learning to Play Games
We train two neural networks adversarially to play normal-form games. At each iteration, a row and a column network take a new randomly generated game and output individual mixed strategies. The parameters of each network are independently updated via stochastic gradient descent to minimize expected regret given the opponent’s strategy. Our simulations demonstrate that the joint behavior of the networks converges to strategies close to Nash equilibria in almost all games. In all 2 × 2 games and in 80% of 3 × 3 games with multiple equilibria, the networks select the risk-dominant equilibrium. Our results show how Nash equilibrium emerges from learning across heterogeneous games.
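A minimal sketch of this kind of training loop (our own simplification, not the authors' code: 2 × 2 games, tiny networks, batch size one, and an illustrative learning rate), in which a row and a column network are updated by stochastic gradient descent to reduce expected regret against each other on freshly drawn random games:

```python
# Sketch: adversarial regret-minimization training on random 2x2 games.
import torch
import torch.nn as nn

torch.manual_seed(0)

def net():  # maps the flattened payoff matrices (8 numbers) to a mixed strategy
    return nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2), nn.Softmax(dim=-1))

row, col = net(), net()
opt_r = torch.optim.SGD(row.parameters(), lr=0.05)
opt_c = torch.optim.SGD(col.parameters(), lr=0.05)

def regret(payoff, p, q):
    """Expected regret of mixture p with payoff matrix `payoff` against mixture q."""
    values = payoff @ q                # expected value of each pure action vs q
    return values.max() - p @ values   # best-response value minus achieved value

for step in range(5000):
    A = torch.rand(2, 2)               # row player's payoffs in a random game
    B = torch.rand(2, 2)               # column player's payoffs
    x = torch.cat([A.flatten(), B.flatten()])
    p, q = row(x), col(x)
    loss_r = regret(A, p, q.detach())        # each side treats the other as fixed
    loss_c = regret(B.T, q, p.detach())
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```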
The Design and Price of Influence
A sender with private preferences would like to influence a receiver’s action by providing information in the form of a statistical test. The technology for information production is controlled by a monopolist intermediary, who offers a menu of tests and prices to screen the sender’s type, possibly including a “threat” test to punish nonparticipation. We characterize the intermediary’s optimal screening menu and the associated distortions, which we show can benefit the receiver. We compare the sale of persuasive information with other forms of influence—overt bribery and controlling access.
Winter Term 2024/25
Fully Self-Justifiable Outcomes
McLennan (1985) defined a justifiable equilibrium as a sequential equilibrium with beliefs consistent with an iterative process of exclusion of implausible actions. While justifiability is intuitive, it has limited selection power and is not clearly related to other equilibrium notions. We introduce a stronger concept, called fully self-justifiable outcome, which requires support by justifiable equilibria independently of the order of the exclusion of actions that are implausible under the given outcome. We argue that full self-justifiability has a significant selection power: Fully self-justifiable outcomes are supported by both justifiable and forward induction equilibria (Cho, 1987) and, in signaling games, are universally divine (Banks and Sobel, 1987) and pass the Intuitive Criterion, D1, D2, and NWBR (Cho and Kreps, 1987). We show that sequentially stable outcomes (Dilmé, 2024c) are fully self-justifiable, ensuring that fully self-justifiable outcomes always exist and can be used to obtain stable behavior.
Hard information design
Many transactions in the marketplace rely on hard (or verifiable) information about the underlying value of the intended exchange, typically through certification---housing, diamonds, and bonds being cases in point. What is the class of Pareto efficient certifications for such scenarios? This paper studies the canonical monopolistic screening problem and models certification as hard information produced through a test that is flexibly chosen pre-trade. It argues that Pareto efficient tests take a simple form---they produce certification with a partitional structure, often with one or two thresholds. This claim is shown to be true for both the linear trading model and the non-linear pricing model.
Optimal Bilateral Trade with Interdependent Values
I study a market for 'lemons' from the perspective of mechanism design in a bilateral trade setup. I provide a closed-form solution for the seller-optimal mechanism under one-sided private information. I show that a seller can disclose the quality of her goods by controlling their supply: high-quality sellers want their goods to be scarce and expensive, while low-quality sellers want them abundant and cheap. In this way, sellers can differentiate their products from each other and maximize their payoffs. I extend the model to two-sided private information and give a novel characterization of the seller-optimal mechanism in this setup. It turns out that with two-sided asymmetric information the seller finds it optimal to engage in price signaling instead of quantity signaling, as this is the least-cost way for her to signal her private information to the buyer.
Market Design for Distributional Objectives in Allocation Problems: An Axiomatic Approach
We study an extension of the standard priority-based allocation model (Gale and Shapley, 1962; Abdulkadiroğlu and Sönmez, 2003) by assuming that there are distributional objectives. The common solution concept of eliminating priority violations may no longer be feasible given the distributional targets. Hence, we study a constrained optimal solution of minimizing priority violations subject to meeting the distributional constraints and other basic axioms. We show that Deferred Acceptance coupled with a particular reserves-and-quotas-based choice rule minimizes priority violations among all mechanisms meeting the axioms. Moreover, the identified mechanism is the unique one with these properties in an important special case of our problem. Our paper introduces an axiomatic approach for solving a fundamental problem in market design with distributional objectives, and provides a one-of-a-kind theoretical justification for Deferred Acceptance and reserves-and-quotas systems in these problems.
Calibrated Forecasting and Persuasion
How should an expert send forecasts to maximize her utility subject to passing a calibration test? We consider a dynamic game where an expert sends probability forecasts to a decision maker. The decision maker uses a calibration test based on past outcomes to verify the expert’s forecasts. We characterize the optimal calibrated forecasting strategy by reducing the dynamic game to a static persuasion problem. A distribution of forecasts is feasible if and only if it is a mean-preserving contraction of the distribution of conditionals (honest forecasts). We characterize the value of information by comparing what an informed and uninformed expert can attain. Moreover, we consider a decision maker who uses regret minimization, instead of the calibration test, to take action. We show that an expert can always guarantee the calibration benchmark against a regret minimizer, and in some instances, she can guarantee strictly more.
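For intuition, here is a minimal sketch (assumptions ours: the binning, tolerance, and sample thresholds are illustrative) of a calibration test based on past outcomes: within each forecast bin, the empirical frequency of the outcome must stay close to the stated forecast, so honest (conditional) forecasts pass.

```python
# Toy binning-based calibration test for binary outcomes.
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
outcomes = (rng.random(T) < 0.6).astype(float)   # binary state with P(state = 1) = 0.6

def calibrated(forecasts, outcomes, n_bins=10, tol=0.05):
    """Pass iff, in every well-populated bin, empirical frequency ~ average forecast."""
    bins = np.minimum((forecasts * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.sum() > 100 and abs(outcomes[mask].mean() - forecasts[mask].mean()) > tol:
            return False
    return True

honest = np.full(T, 0.6)     # the conditional (honest) forecast
inflated = np.full(T, 0.8)   # an exaggerated forecast
print(calibrated(honest, outcomes), calibrated(inflated, outcomes))   # True False
```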
Flexible Moral Hazard Problems with Adverse Selection
We study a moral hazard problem in which a principal aims to incentivize an agent who can directly control the output distribution under a moment-based cost function. The agent is risk-neutral and protected by limited liability, while possessing private information about his cost. Deviating from classical models, the principal can not only motivate the agent to exert a certain level of aggregate effort by designing the 'power' of the contracts, but can also regulate the support of the chosen output distributions by designing the 'range' of the contracts. We study the basic properties of the optimal menu and show that, at the optimum, either a single full-range contract is provided, or the optimal low-type contract excludes some high outputs, or the optimal high-type contract excludes some low outputs. We provide necessary and sufficient conditions for the optimal menu to consist of a single full-range contract when the effort function is convex, and show that this condition remains sufficient for general effort functions.
Summer Term 2024
Contractual Chains
This paper develops a model of private bilateral contracting, in which an exogenous network determines the pairs of players who can communicate and contract with each other. After contracting, the players interact in an underlying game with globally verifiable productive actions and externally enforced transfers. The paper investigates whether such decentralized contracting can internalize externalities that arise due to parties being unable to contract directly with others whose productive actions affect their payoffs. The contract-formation protocol, called the “contracting institution,” is treated as a design element. The main result is positive: There is a contracting institution that yields efficient equilibria for any underlying game and connected network. A critical property is that the institution allows for sequential contract formation or revision. The equilibrium construction features assurance contracts and cancellation penalties.
Deterrence of Unwanted Behavior: a Theoretical and Experimental Investigation
Suppose that spreading enforcement resources uniformly across time and space allows sanctioning anyone who engages in an unwanted activity with probability p. However, by concentrating enforcement resources, it is possible to split the probability p into a higher probability of sanction p_H > p in some targeted areas or times, at the expense of a lower probability of sanction p_L < p elsewhere. If the objective is to minimize the overall level of the socially unwanted activity, irrespective of its specific location or time, does splitting the probability of sanction help achieve this goal? We present a theoretical model of this situation and undertake an experiment that allows us to answer this question empirically. Since the idea of beneficial splitting of prior beliefs is central to the Bayesian persuasion literature, our study also serves as an experimental test of whether Bayesian persuasion can yield practical benefits in a realistically parametrized setting.
Experts & Experiments
We develop a two-period model of decision making under uncertainty. The key novelty is that the decision maker can both consult an expert for advice and experiment, learning from his experience. We characterize a family of equilibria in which expert advice and experimentation coexist on the equilibrium path. We show that the decision maker's ability to experiment shapes the advice he receives from the expert and, in turn, that the expert's advice shapes the experiments the decision maker undertakes. In equilibrium, expert advice and experimentation are complements: the more precisely the expert communicates, the greater the decision maker's incentive to experiment. However, there exists an upper bound on the quality of advice that the expert can provide in equilibrium, and this bound is lower than when the decision maker cannot experiment. The ability to experiment empowers the decision maker but, in so doing, makes communication with the expert more difficult, so much so that both players can be left worse off.
Learning from Viral Content
We study learning on social media with an equilibrium model of users interacting with shared news stories. Rational users arrive sequentially, observe an original story (i.e., a private signal) and a sample of predecessors’ stories in a news feed, and then decide which stories to share. The observed sample of stories depends on what predecessors share as well as the sampling algorithm generating news feeds. We focus on how often this algorithm selects more viral (i.e., widely shared) stories. Showing users viral stories can increase information aggregation, but it can also generate steady states where most shared stories are wrong. These misleading steady states self-perpetuate, as users who observe wrong stories develop wrong beliefs, and thus rationally continue to share them. Finally, we describe several consequences for platform design and robustness.
The Worker-Job Surplus
The worker-job surplus—the sum of the worker’s and the employer’s net values of a match—is the object that drives decisions in most matching models of the labor market. We develop a theory-based method to determine which of the observable worker and job characteristics impact the worker-job surplus in the data. To do so, we exploit the mobility choices of employed workers. Our method further provides a test of the commonly used single-index assumption, according to which worker and job heterogeneity can each be summarized by scalar indices. We implement our method on US data and find that relatively few worker and job attributes are surplus-relevant. We reject the existence of a single-index representation of these relevant multi-dimensional worker and job attributes. Finally, we illustrate the practical usefulness of these results in a new approach to defining the economy’s labor submarkets, based on workers with different surplus-relevant skills climbing different job ladders.
Comparative statics with adjustment costs and the le Chatelier principle
We develop a theory of monotone comparative statics for models with adjustment costs. We show that comparative statics conclusions may be drawn under the usual ordinal complementarity assumptions on the objective function, assuming very little about costs: only a mild monotonicity condition is required. We use this insight to prove a general le Chatelier principle: under the ordinal complementarity assumptions, if short-run adjustment is subject to a monotone cost, then the long-run response to a shock is greater than the short-run response. We extend these results to a fully dynamic model of adjustment over time: the le Chatelier principle remains valid, and under slightly stronger assumptions, optimal adjustment follows a monotone path. We apply our results to models of saving, production, pricing, labor supply, and investment.
Multidimensional Screening with Rich Consumer Data
We study multi-good sales by a seller who has access to rich data about a buyer's valuations for the goods. Optimal mechanisms in such multidimensional screening problems are known to be complicated in general and to bear little resemblance to mechanisms observed in practice. We therefore instead analyze the optimal convergence rate of the seller's revenue to the first-best revenue as the amount of data grows large. Our main result provides a rationale for a simple and widely used class of mechanisms---(pure) bundling---by showing that these mechanisms allow the seller to achieve the optimal convergence rate. In contrast, we find that another simple class of mechanisms---separate sales---yields a suboptimal convergence rate to the first-best and is thus outperformed by bundling whenever the seller has sufficiently precise information about consumers.
Sequential Mechanisms for Evidence Acquisition
We consider optimal mechanisms for inducing agents to acquire costly evidence in a setting where a principal has a good to allocate that all agents want. We show that optimal mechanisms are necessarily sequential in nature and have a threshold structure. Agents with higher costs of obtaining evidence and/or worse distributions of value for the principal are asked for evidence later, if at all. We derive these results in part by exploiting the relationship between the Lagrangian for this problem and the classic Weitzman (1979) “Pandora’s box” problem.
Design on Matroids: Diversity vs. Meritocracy
We provide optimal solutions for an institution that has the dual goals of diversity and meritocracy when choosing from a set of applications. For example, in college admissions, administrators may want to admit a diverse class in addition to choosing students with the highest qualifications. We provide a class of choice rules that maximize merit subject to attaining a given diversity level. Using this class, we find all subsets of applications on the diversity-merit Pareto frontier. In addition, we provide two novel characterizations of matroids.
Mediated Renegotiation
We develop a new approach to contract renegotiation under informational frictions. Specifically, we consider mediated mechanisms that cannot be contingent on any subsequent offer but can generate a new source of asymmetric information between the contracting parties. Taking as a reference the canonical framework of Fudenberg and Tirole (1990), we show that, if mediated mechanisms are allowed, the corresponding renegotiation game admits only one equilibrium allocation, which coincides with the second-best one. Thus, the inefficiencies typically associated with the threat of renegotiation may be completely offset by the design of more sophisticated trading mechanisms.
Summer Term 2024
Screening with flexible investments
I study a principal's problem of designing a screening mechanism when the agent can invest in her preference type ex ante. The agent can flexibly choose any distribution of types at a cost. I characterize the set of implementable distributions and derive optimal mechanisms for specific cost structures. When costs are linear, an optimal mechanism is first-best: it induces first-best investment, and the principal extracts the first-best surplus. When costs display decreasing risk or are mean-based, an optimal mechanism induces inefficiently low investment. When costs display increasing risk and, in addition, are moment-based, optimal distributions have finite support and are generally not first-order dominated by first-best distributions.
From Design to Disclosure
This paper studies voluntary disclosure in general sender–receiver games in which the sender has rich evidence that she can disclose to a receiver who then can set an allocation and transfers. This general framework encompasses monopoly pricing, bargaining over policies, and insurance markets. In this setting, we characterize the full set of equilibrium payoffs. Our main result establishes that any payoff profile that can be achieved through information design can also be sustained as an equilibrium of the disclosure game. Hence, in the contracting environments that we study, our analysis offers a microfoundation for information design and suggests that the gap between information design and disclosure is negligible.
I'll Tell You Tomorrow: Committing to Future Commitments
A principal wishes to promote the agent only if the state is good, and gradually receives private information about the state. The agent wants promotion, but would rather leave than stay and fail promotion. The principal optimally induces the agent to stay by committing to commit, that is, by committing today to tell the agent tomorrow about his chances of promotion the day after. The principal may promote the agent with some probability regardless of her information. Our results apply to worker retention, relationship-specific investment, and forward guidance.
Pigou Meets Wolinsky: Search, Price Discrimination, and Consumer Sophistication
We study the competitive effects of personalized pricing in horizontally differentiated markets with search frictions. We integrate the possibility of first-degree price discrimination into the classic Wolinsky (1986) framework of consumer search. If all consumers are rational, personalized pricing leads to higher consumer surplus if and only if there are no search frictions. If all consumers are unaware that firms price discriminate, i.e., are naive in the sense of Eyster and Rabin (2005), this result is reversed: personalized pricing improves consumer surplus unless search costs are prohibitive.
Horizontal Partial Cross Ownership and Innovation
We study the effects of partial cross ownership (PCO) among rival firms on their incentives to innovate. In our model, PCO gives rise to a price effect, which operates through price competition and hence the marginal benefit from investment, as well as a cannibalization effect, which arises because each firm internalizes part of the negative externality of its investment on the rival's profit. We show that, overall, PCO may benefit or harm consumers depending on the size of the PCO stakes, their degree of symmetry, the size of the innovation, its marginal cost, and whether or not it is drastic.
(Almost) Full Surplus Extraction with Endogenous Learning
We study the sale of an indivisible good. The buyers’ valuations are correlated and unknown a priori. Each buyer can learn privately and flexibly about her own valuation at a cost. Neither how she learns nor what she learns is contractible; hence the seller faces both hidden action and hidden information. The first-best outcome is generally not incentive compatible. We show that, if the seller can commit ex ante to a selling mechanism that privately recommends an information structure to each buyer and elicits reports from all buyers after they have learned, she can obtain a payoff arbitrarily close to her first-best payoff.
Disinformation in the Wald Model
In the classical sequential sampling model of Wald (1945), a decision maker (Alice) learns a binary state from a noisy signal. We study the effects of disinformation by introducing an adversary (Bob) who can pay a cost to distort the signal. Both players are Bayesian, ex ante symmetrically informed, and share a common prior about the state. Alice wants to choose an action that matches the state, while Bob prefers her to choose a high action regardless of the state. We show that disinformation invariably reduces Alice's welfare and decision accuracy. Although Bob has an incentive to engage in distortion, it may backfire on him in equilibrium. We also analyze how the distribution of Bob's distortion cost affects the equilibrium strategies and outcomes of both players. Our results rest on novel insights into the classic sequential sampling problem with more than two states.
The Dynamics of Robust Experimentation
Archive
Mechanism Design with Restricted Communication
We consider a Sender–Receiver environment where the sender is informed of the state and the receiver chooses actions. There is a communication channel between them, consisting of sets of input/output messages and a fixed transition probability. The sender reaches out to the receiver through the channel, which limits communication in two ways: the number of available messages may be small, and messages may be noisy. We consider a mechanism design setup whereby the receiver commits to a mechanism that selects a distribution of actions, and possibly monetary transfers, contingent on output messages. We aim to characterize the joint distributions that can be implemented by communication over the channel, given the incentives of the sender. We consider both one-shot problems and series of i.i.d. problems. In particular, we show that when the sender and the receiver are engaged in a series of problems, linking decisions together is a more efficient instrument than monetary transfers.
Information Acquisition in Matching Markets: The Role of Price Discovery
We explore the acquisition and flow of information in matching markets through a model of college admissions with endogenous costly information acquisition. We extend the notion of stability to this partial-information setting and introduce regret-free stability as a refinement that additionally requires optimal student information acquisition. We show that regret-free stable outcomes exist, and that finding them is equivalent to finding appropriately defined market-clearing cutoffs. To understand information flows, we recast matching mechanisms as price-discovery processes. No mechanism guarantees a regret-free stable outcome, because information deadlocks imply that some students must acquire information suboptimally. Our analysis suggests approaches for facilitating efficient price discovery by leveraging historical information or market sub-samples to estimate cutoffs. We show that mechanisms that use such methods to advise applicants on their admission chances yield approximately regret-free stable outcomes. A survey of university admission systems highlights the practical importance of providing applicants with information about their admission chances.
Unpaired Kidney Exchange: Overcoming Double Coincidence of Wants without Money
For an incompatible patient-donor pair, kidney exchanges often forbid receipt-before-donation (the patient receives a kidney before the donor donates) and donation-before-receipt, causing a double-coincidence-of-wants problem. We study an algorithm, the Unpaired kidney exchange algorithm, which eliminates this problem. In a dynamic matching model, we show that the waiting time of patients under Unpaired is close to optimal and substantially shorter than under widely used algorithms. Using a rich administrative dataset from France, we show that Unpaired achieves a match rate of 63 percent and an average waiting time of 176 days for transplanted patients. The (infeasible) optimal algorithm is only slightly better (64 percent and 144 days); widely used algorithms deliver less than 40 percent and at least 232 days. We discuss a range of solutions that can address the potential practical incentive challenges of Unpaired. In particular, we extend our analysis to an environment in which a deceased-donor waitlist can be integrated to improve the performance of algorithms. We show that our theoretical and empirical comparisons continue to hold. Finally, based on these analyses, we propose a practical version of the Unpaired algorithm.
Informationally Robust Cheap-Talk
We study the robustness of cheap-talk equilibria to infinitesimal private information of the receiver in a model with a binary state space and state-independent sender preferences. We show that the sender-optimal equilibrium is robust if and only if this equilibrium either reveals no information to the receiver or fully reveals one of the states with positive probability. We then characterize the actions that can be played with positive probability in any robust equilibrium. Finally, we fully characterize the optimal sender utility under binary receiver private information, and provide bounds for the optimal sender utility under general private information.
Should the Timing of Inspections be Predictable?
A principal hires an agent to work on a long-term project that culminates in a breakthrough or a breakdown. At each time, the agent privately chooses to work or shirk. Working increases the arrival rate of breakthroughs and decreases the arrival rate of breakdowns. To motivate the agent to work, the principal conducts costly inspections. She fires the agent if shirking is detected. We characterize the principal’s optimal inspection policy. Periodic inspections are optimal if work primarily speeds up breakthroughs. Random inspections are optimal if work primarily delays breakdowns. Crucially, the agent’s actions determine his risk attitude over the timing of punishments.
Early-Career Discrimination: Spiraling or Self-Correcting?
Do workers from social groups with comparable productivity distributions obtain comparable lifetime earnings? We study how a small amount of early-career discrimination propagates over time when workers’ productivity is revealed through employment. In breakdown learning environments that track primarily on-the-job failures, such discrimination spirals into a substantial lifetime earnings gap for groups of comparable productivity, whereas in breakthrough learning environments that track successes, early discrimination self-corrects so as to guarantee comparable lifetime earnings. This contrast is robust to large labor markets, flexible wages, inconclusive learning, investment in productivity, and misspecified employers’ beliefs.
Many economic institutions and organizational practices make early success have a persistent effect on final outcomes. By granting additional resources, favorable treatment, or other forms of bias to early strong performers, they raise the likelihood with which these early strong performers become final winners. When performance is informative about ability differentials, such bias can serve as a tool to increase “selective efficiency”, i.e. the allocation of resources or decision-making authority to the most talented. However, in situations where noise swamps ability differences in determining relative performance, the use of bias would have the sole effect of making luck persistent. Such an outcome would seem at odds with the meritocratic principle of requiring differences in economic outcomes to be attributable to ability or effort differentials. In this paper, we challenge this view by showing that even as noise swamps ability differences in driving performance, maximization of selective efficiency continues to require bias favoring early leaders. Moreover, inducing greater persistence of outcomes in noisier environments can be consistent with the objective of assigning resources to the most able.
Equilibrium Selection in Repeated Games with Patient Players
What determines the path of play in an infinitely repeated game? Typically the players’ interests are not perfectly aligned but there is scope for cooperation. Potential surplus could be shared in different ways. The folk theorems of repeated games provide no guidance about the outcome. In the more tractable setting where players can sign binding contracts after any history of play, Abreu and Pearce (2007) show that slight reputational perturbations of the game lead to predictions consistent with Nash bargaining with threats (Nash, 1953). In many settings of interest, such contracts are not available. Nonetheless, combining reputational perturbation with modest continuity and renegotiation conditions in two-person repeated games with patient players again isolates play that is consistent with Nash bargaining with threats.
Organizational Change and Reference-Dependent Preferences
Reference-dependent preferences can explain several puzzling observations on organizational change. Loss aversion clarifies why change is often slow or stagnant for long periods followed by a sudden boost in productivity during a crisis. Moreover, it accounts for the fact that different firms in the same industry can have significant productivity differences. The model also demonstrates the importance of expectation management even if all parties have rational expectations. Social preferences explain why it may be optimal to split up a firm into two different entities.
Reputation for a Degree of Honesty
Can reputation replace legal commitment for an institution making periodic public announcements? Near the limiting case of ideal patience, results of Fudenberg and Levine (1992) imply a positive answer in value terms, in the presence of a rich set of behavioral types. Little is known, however, about equilibrium behavior in such reputational equilibria. Computational and analytic approaches are combined here to provide a detailed look at how reputations are managed. Behavior depends upon which of three reputational regions pertains after a history of play. These characterizations hold even far from the patient limit. Near the limit, a novel method of calculating present discounted values, stationary promise-keeping, helps establish a close connection between the reliability of the institution's reports and the Kamenica and Gentzkow (2011) commitment benchmark. It is striking that this connection still holds when the benchmark type is not available (in the set of behavioral types) to be imitated.
Two Approaches to Iterated Reasoning in Games
Level-k analysis and epistemic game theory are two different ways of investigating iterative reasoning in games. This paper explores the relationship between these two approaches. An important difference between them is that level-k analysis begins with an exogenous anchor on the players’ beliefs, while epistemic analysis begins with arbitrary epistemic types (hierarchies of beliefs). To close the gap, we develop the concept of a level-k epistemic type structure, which incorporates the exogenous anchor. We also define a complete level-k type structure in which the exogenous anchor is the only restriction on hierarchies of beliefs. One might conjecture that, in a complete structure, the strategies that can be played under rationality and (m − 1)th-order belief of rationality are precisely those strategies played by a level-k player, for any k ≥ m. In fact, we prove that the strategies that can be played are the m-rationalizable strategies (i.e., the strategies that survive m rounds of elimination of strongly dominated strategies). This surprising result shows that level-k analysis and epistemic game theory are two genuinely different approaches, with different implications for inferring the players’ reasoning about rationality from their observed behavior.
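As a reference point for the elimination procedure mentioned in the last sentence, here is a minimal sketch (simplified: it checks domination by pure strategies only, whereas full m-rationalizability also removes strategies strongly dominated by mixed strategies, which requires a linear program) of m rounds of elimination of strongly dominated strategies in a two-player game.

```python
# Sketch: m rounds of elimination of strongly (strictly) dominated strategies,
# restricted to domination by pure strategies.
import numpy as np

def eliminate(A, B, m):
    """A, B: row/column players' payoff matrices. Returns surviving index sets."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    for _ in range(m):
        rows = [r for r in rows
                if not any(all(A[r2, c] > A[r, c] for c in cols) for r2 in rows)]
        cols = [c for c in cols
                if not any(all(B[r, c2] > B[r, c] for r in rows) for c2 in cols)]
    return rows, cols

# Prisoner's dilemma: one round already leaves only (Defect, Defect).
A = np.array([[3, 0], [5, 1]])        # row player's payoffs; game is symmetric
print(eliminate(A, A.T, m=1))         # -> ([1], [1])
```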
Posterior-Mean Separable Costs of Information Acquisition
We analyze a problem of revealed preference given state-dependent stochastic choice data in which the payoff to a decision maker (DM) depends only on their beliefs about posterior means. Often, the DM must also learn about or pay attention to the state; in applied work on this subject, it is often assumed that the costs of such learning are linear in the distribution over posterior means. We provide testable conditions to identify whether this assumption holds. This allows for the use of information design techniques to solve the DM's problem.
The Costly Wisdom of Inattentive Crowds
Incentivizing the acquisition and aggregation of information is a key task of the modern economy (e.g., financial markets). We study the design of optimal mechanisms for this task. A population of rationally inattentive (RI) agents can flexibly learn about a common state of nature, subject to uniformly posterior separable (UPS) information costs. A principal, who aims to procure a given information structure from the agents at minimal cost, can design general dynamic mechanisms with report- and state-contingent payments. If the agents are risk-neutral, prediction markets implement the first-best. If the agents are risk-averse, no mechanism can approximate the first-best cost—not even those that harness the “wisdom of the crowd” by employing a large number of “informationally small” agents. This inefficiency derives from the combination of agents’ moral hazard and adverse selection. Our characterization of incentive compatibility, which exploits an equivalence between proper scoring rules and UPS information costs, is tractable and portable to other design settings with RI agents (e.g., principal-expert and screening problems).
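The incentive logic of proper scoring rules that underlies the prediction-market result can be checked numerically; the following sketch (ours, with an illustrative belief) verifies properness of the quadratic (Brier) scoring rule: a risk-neutral agent's expected score is maximized by reporting her true belief.

```python
# Numeric check that the Brier scoring rule is proper.
import numpy as np

def brier(report, outcome):
    """Score for reporting probability `report` that the binary outcome is 1."""
    return -(report - outcome) ** 2

belief = 0.7                                 # hypothetical true belief
reports = np.linspace(0, 1, 101)
expected = belief * brier(reports, 1) + (1 - belief) * brier(reports, 0)
print(reports[np.argmax(expected)])          # -> 0.7: truth-telling is optimal
```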
Optimal Security Design for Risk-Averse Investors
We use the tools of mechanism design, combined with the theory of risk measures, to analyze a model in which a cash-constrained owner of an asset with stochastic returns raises capital from a population of investors who differ in their risk aversion and budget constraints. The distribution of the asset's cash flow is assumed here to be common knowledge: no agent has private information about it. The issuer partitions and sells the asset's realized cash flow as several asset-backed securities, one for each type of investor. The optimal partition conforms to the commonly observed practice of tranching (e.g., senior debt, junior debt, and equity), where senior claims are paid before subordinate ones. The holders of the more senior or junior tranches are determined by the relative risk appetites of the different types of investors and of the issuer, with the more risk-averse agents holding the more senior tranches. Tranching arises endogenously here in an optimal mechanism because of simple economic forces: the differences in risk appetites among agents, and in the budget constraints they face.
Dynamic Contracting with Flexible Monitoring
We study a principal's joint design of optimal monitoring and compensation schemes to incentivize an agent, incorporating information design into a dynamic contracting framework. The principal can flexibly allocate her limited monitoring capacity between seeking evidence that confirms or contradicts the agent's effort, as the basis for reward or punishment. When the agent's continuation value is low, the principal seeks only confirmatory evidence. When it exceeds a threshold, the principal seeks mainly contradictory evidence. Importantly, the agent's effort is perpetuated if and only if he is sufficiently productive.
A Measure of Behavioral Heterogeneity
In this paper we propose a novel way to measure behavioral heterogeneity in a population of stochastic individuals. Our measure is choice-based; it evaluates the probability that, over a randomly selected menu, the sampled choices of two sampled individuals differ. We provide axiomatic foundations for this measure and a decomposition result that separates heterogeneity into its intra- and inter-personal components.
A mechanism-design approach to property rights
We propose a framework for studying the optimal design of rights relating to the control of an economic resource, which we broadly refer to as property rights. An agent makes an investment decision, affecting her valuation for the resource, and then participates in a trading mechanism chosen by a principal in a sequentially rational fashion, leading to a hold-up problem. A designer, who would like to incentivize efficient investment and whose preferences may differ from those of the principal, can endow the agent with a menu of rights that determine the agent's set of outside options in the interaction with the principal. We characterize the optimal rights as a function of the designer's and the principal's objectives and the investment technology. We find that optimal rights typically differ from a classical property right giving the agent full control over the resource. In particular, we show that the optimal menu requires at most two types of rights, including an option-to-own, which grants the agent control over the resource upon paying a pre-specified price.
Time Trumps Quantity in the Market for Lemons
We consider a dynamic adverse selection model where privately informed sellers of divisible assets can choose how much of their asset to sell at each point in time to competitive buyers. With commitment, delay and lower quantities are equivalent ways to signal higher quality. Only the discounted quantity traded is pinned down in equilibrium. With spot contracts and observable past trades, there is a unique and fully separating path of trades in equilibrium. Irrespective of the horizon and the frequency of trades, the same welfare is attained by each seller type as in the commitment case. When trades can take place continuously over time, each type trades all of its assets at a unique point in time. Thus, only delay is used to signal higher quality. When past trades are not observable, the equilibrium only coincides with the one with public histories when trading can take place continuously over time.
How Competition Shapes Information in Auctions
We consider auctions where buyers can acquire costly information about their valuations and those of others, and investigate how competition between buyers shapes their learning incentives. In equilibrium, buyers find it cost-efficient to acquire some information about their competitors so as to only learn their valuations when they have a fair chance of winning. We show that such learning incentives make competition between buyers less effective: losing buyers often fail to learn their valuations precisely and, as a result, compete less aggressively for the good. This depresses revenue, which remains bounded away from what the standard model with exogenous information predicts, even when information costs are negligible. Finally, we examine the implications for auction design. First, setting an optimal reserve price is more valuable than attracting an extra buyer, which contrasts with the seminal result of Bulow and Klemperer (1996). Second, the seller can incentivize buyers to learn their valuations, hence restoring effective competition, by maintaining uncertainty over the set of auction participants.
Optimal testing in disclosure games
We extend the standard disclosure model between a sender and a receiver by allowing the receiver to gather partial information. The receiver can choose any signal with at most k realizations, which we call a test. Since the test choice is observed by the sender, it influences the sender’s disclosure incentives. We characterize the receiver's optimal test and show how it resolves the trade-off between the informativeness of the test and disclosure incentives. If the receiver aimed only at maximizing informativeness, she would choose a deterministic test. In contrast, the optimal test involves randomization over signal realizations while maintaining a simple structure. This structure allows us to interpret the randomization as the strategic use of uncertain evaluation standards to provide disclosure incentives.
(Un-)Common Preferences, Ambiguity, and Coordination
We study the ‘common prior’ assumption and its implications when agents have differential information and preferences beyond subjective expected utility (SEU). We consider consequentialist interim preferences that are consistent with respect to the same ex-ante evaluation and characterize the latter in terms of extreme limits of higher-order expectations. Notably, agents are mutually dynamically consistent with respect to the same ex-ante evaluation if and only if all the limits of higher-order expectations are the same, extending beyond SEU the classical characterization of the common prior assumption in Samet (1998). Within this framework, we characterize the properties of equilibrium prices in financial beauty contests (and other coordination games) in terms of the agents’ private information, coordination motives, and attitudes toward uncertainty. Unlike in the SEU case, the limit price does not in general coincide with the common ex-ante expectation. Moreover, when the agents share the same benchmark probabilistic model, strong coordination motives make their concern for misspecification disappear in equilibrium, exposing them to a divergence between the market price and the fundamental value of the security.
Stationary social learning in a changing environment
We consider social learning in a changing world. With changing states, societies can be responsive only if agents regularly act upon fresh information, which significantly limits the value of observational learning. When the state is close to persistent, a consensus typically emerges whereby most agents choose the same action. However, the consensus action is not perfectly correlated with the state, because societies exhibit inertia following state changes. Phases of inertia may be longer when signals are more precise, even if agents draw large samples of past actions, as actions then become too correlated within samples, thereby reducing informativeness and welfare.
Auctions vs. Negotiations: The Role of the Payment Structure
We investigate a seller’s strategic choice between negotiating with fewer bidders and running an auction with additional bidders, allowing for general security payments. The key factor favoring negotiations is the seller’s rent-extraction benefit of setting her preferred payment structure; reserve prices are of secondary importance. Negotiations are more valuable if the seller’s asset creates more value at more productive bidders – in which case sellers prefer contingent payments while bidders prefer cash – and if the dispersion and magnitude of bidders’ private valuations are higher. Our results have implications for mergers and acquisitions, patent licensing, and compensation negotiations in tight labor markets.
Informing agents amidst biased narratives
I study the strategic interaction between a benevolent sender (who provides data) and a biased narrator (who interprets data), who compete to persuade a boundedly rational receiver (who takes an action). The receiver does not know the data-generating model. She chooses between the models provided by the sender and the narrator using the maximum likelihood principle, selecting the one that best fits the data given her prior belief. The sender faces a trade-off between providing precise information and minimizing misinterpretation. Surprisingly, full disclosure can be suboptimal and even backfire. I identify a finite set of models that contains the optimal data-generating model, which maximizes the receiver’s expected utility. The sender can guarantee a non-negative value of information, preventing harm from misinterpretation. I apply this framework to information campaigns and employee feedback.
Putting Context into Preference Aggregation
The axioms underlying Arrow's impossibility theorem are very restrictive in terms of what can be used when aggregating preferences: social preferences may depend neither on the menu nor on preferences over alternatives outside the menu. But context matters. So we weaken these restrictions to allow context to be taken into account. The context, as we define it, describes which alternatives in the menu and which preferences over alternatives outside the menu matter. We obtain unique representations. These are discussed in examples involving markets, bargaining, and the intertemporal well-being of an individual.
Coordination in Complex Environments
I introduce a framework to study coordination in highly uncertain environments. Coordination is an important aspect of innovative contexts, where the more innovative a course of action, the more uncertain its outcome. To explore the interplay of coordination and informational complexity, this paper embeds a beauty-contest game in a complex environment. I uncover a new conformity phenomenon: the new effect may push towards exploration of unknown alternatives or constitute a status quo bias, depending on the network structure of the connections among players. In an application to oligopoly pricing, an increase in complexity results in a higher level of conformity in pricing policies. I study the new coordination problems introduced by complexity and propose an equilibrium selection rule. In an application to multi-division organizations, sufficiently high complexity "implements" the same profits as centralized decision-making. I also study heterogeneity across players in the mapping from decisions to outcomes, as well as private information about a status quo.
Motivated Misspecification
I propose a model of expectation management to study how an interactive environment breeds and perpetuates a certain type of misperception. The paper provides a novel approach to incentivizing effort (perception manipulation), complementary to the usual monetary or informational incentives studied in principal-agent theory. It endogenizes model misspecification in the literature on misspecified learning within a principal-agent framework and can be applied to a wide range of interactions such as mentor-mentee, parent-child, self-manipulation, and emotional abuse in professional or intimate relationships.
Archive
Efficient Mechanisms under Unawareness
We study the design of mechanisms under asymmetric awareness and asymmetric information. Unawareness refers to the lack of conception rather than the lack of information. With limited awareness, an agent's message space is type-dependent because an agent cannot misrepresent herself as a type that she is unaware of. Nevertheless, we show that the revelation principle holds.
The revelation principle is of limited use, though, because a mechanism designer is hardly able to commit to outcomes for type profiles of which he is unaware. Yet the mechanism designer can at least commit to properties of social choice functions, such as efficiency given ex post awareness. Assuming quasi-linear utilities, private values, and welfare isotonicity in awareness, we show that if a social choice function is utilitarian ex post efficient, then it is implementable under pooled agents' awareness in conditional dominant strategies. That is, it is possible to reveal all asymmetric awareness among agents and implement the welfare-maximizing social choice function in conditional dominant strategies without the social planner needing to be fully aware ex ante. To this end, we develop dynamic versions of the Groves and Clarke mechanisms along which true types are revealed and subsequently elaborated at endogenous higher awareness levels. We explore how asymmetric awareness affects budget balance and participation constraints.
Simultaneous bidding in second-price auctions
In this paper, we analyze a model of competing sealed-bid second-price auctions where bidders have unit demand and can bid on multiple auctions simultaneously. We show that there is no symmetric pure equilibrium with strictly increasing strategies, unlike in standard auction games. However, a symmetric mixed-strategy equilibrium exists, where all bidders will bid on all available auctions with probability one. This holds true for any mixed equilibrium. We then focus on two specific scenarios: one with two auctions and three bidders, and the other with two auctions and two bidders. For the case of three bidders, we identify a pure equilibrium. In contrast, for the case of two bidders, we find a continuum of mixed equilibria.
Informing to Divert Attention
We study a multidimensional Sender-Receiver game in which Receiver can acquire limited information after observing Sender's signal. Depending on the parameters describing the conflict of interest between Sender and Receiver, we characterise optimal information disclosure and the information acquired by Receiver in response. We show that in the case of partial conflict of interest (aligned on some dimensions, misaligned on others), Sender uses the multidimensionality of the environment to divert Receiver's attention away from the dimensions of misalignment. Moreover, there is a negative value of information, in the sense that Receiver would be better off if she could commit not to extract private information or to have access only to information of lower quality. We present applications to informational lobbying and optimal bonus policies.
Incentives and Efficiency in Constrained Allocation Mechanisms
We study private-good allocation mechanisms in which an arbitrary constraint delimits the set of feasible joint allocations. This generality provides a unified perspective over several prominent examples that can be parameterized as constraints in this model, including house allocation, roommate assignment, and social choice. We characterize the set of two-agent strategy-proof and Pareto efficient mechanisms, showing that every such mechanism is a form of “local dictatorship.” For more agents, we show that an N-agent mechanism is group strategy-proof if and only if all of its two-agent marginal mechanisms (defined by holding fixed all but two agents’ preferences) are individually strategy-proof and Pareto efficient, allowing us to leverage the two-agent characterization for more general problems. To illustrate their usefulness, we apply these results to the roommates problem to provide the first characterization of all group strategy-proof and Pareto efficient mechanisms, which turn out to be sequential dictatorships. Our results also yield a novel proof of the Gibbard–Satterthwaite Theorem. Finally, we introduce a new class of mechanisms, which we call “local priority” mechanisms, that exist for all constraints and subsume many important classes of existing mechanisms.
Screening: A Unified Geometric Perspective
We investigate single-agent mechanism design with arbitrary restrictions on the agent’s vNM preferences over a finite set of outcomes. This covers many standard problems with or without transfers, including the (multi-good) monopolistic seller problem. We characterize incentive-compatible mechanisms through their associated delegation sets, convex bodies within the unit simplex. Every extreme point of the set of incentive-compatible mechanisms grants the agent a veto, allowing them to choose, for any outcome, a lottery that excludes it. Determining whether a veto mechanism is an extreme point corresponds to solving the indecomposability problem for convex bodies as introduced by Gale (1954). In one-dimensional type spaces, we find that the principal’s ex-ante expected utility is maximized by offering a menu with at most three options. However, for multi-dimensional type spaces, no such simplification exists: the set of (exposed) extreme points is dense in the set of veto-granting mechanisms. We apply these insights to derive known and novel results about the monopolistic seller problem.
A Robust Characterization of Nash Equilibrium
We give a robust characterization of Nash equilibrium by postulating coherent behavior across varying games. Nash equilibrium is the only solution concept that satisfies consequentialism, consistency, and rationality. It follows that every equilibrium refinement violates at least one of these properties. We moreover show that every solution concept that approximately satisfies consequentialism, consistency, and rationality returns approximate Nash equilibria. The latter approximation can be made arbitrarily good by increasing the approximation of the axioms. This result extends to various natural subclasses of games such as two-player zero-sum games, potential games, and graphical games.
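For the two-player zero-sum subclass mentioned above, the characterized object is computable by linear programming; here is a minimal sketch (ours, using the textbook LP formulation, not the paper's method) that recovers the equilibrium mixture in rock-paper-scissors.

```python
# Sketch: Nash equilibrium of a two-player zero-sum game via linear programming.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)   # rock-paper-scissors, row player's payoffs
A_shift = A - A.min() + 1                 # shift payoffs to be strictly positive

# Row player solves: max v s.t. x^T A_shift >= v * 1, sum(x) = 1, x >= 0.
# With y = x / v this becomes: min sum(y) s.t. A_shift^T y >= 1, y >= 0.
res = linprog(c=np.ones(3), A_ub=-A_shift.T, b_ub=-np.ones(3), bounds=[(0, None)] * 3)
x = res.x / res.x.sum()                   # row player's equilibrium mixture
print(np.round(x, 3))                     # -> [0.333 0.333 0.333]
```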
How to get advice from reputation concerned experts: A mechanism design approach
We examine how a decision maker (DM) should organize communication with experts who are concerned only with improving their own reputation rather than with helping her per se. Employing a mechanism design approach, we consider all possible ways in which this communication could be organized. We characterize when the experts' reputation concerns prevent the DM from learning the information necessary to make a first-best choice. We show that when the first best is not achievable, it is never optimal for the DM to meet with the experts privately. She obtains better results when she uses a communication protocol in which the experts engage in a debate but the DM is left in the dark about the contribution of each expert to the final recommendation.
Optimal testing in disclosure games
We study a disclosure game between an informed sender and a receiver where the receiver has the option to gather partial information through a test. We characterize the optimal binary test and show that the receiver sacrifices informativeness of the test to incentivize disclosure. Specifically, by pooling medium states with low states, the receiver induces disclosure of medium states and thus, in equilibrium, observes more information.
Adversarial Forecasters, Suspense, and Randomization
An adversarial forecaster representation sums an expected utility function and a measure of surprise that depends on an adversary’s forecast. These representations are concave and satisfy a smoothness condition; conversely, any concave preference relation satisfying the smoothness condition admits an adversarial forecaster representation. Because of concavity, the agent typically prefers to randomize. We characterize the support size of optimally chosen lotteries and how it depends on the preference for surprise.
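Schematically (notation ours, for illustration only): writing u for the utility index, S for the surprise measure, and q for the adversary’s forecast, such a representation evaluates a lottery p as

\[
V(p) \;=\; \mathbb{E}_{x \sim p}\!\left[u(x)\right] \;+\; \min_{q}\, \mathbb{E}_{x \sim p}\!\left[S(x, q)\right],
\]

which is concave in p as the sum of a linear functional and a lower envelope of linear functionals; this is the source of the preference for randomization noted above.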
A Theory of Auditability for Allocation and Social Choice Problems
In centralized market mechanisms, individuals may not fully observe other participants' type reports. Hence, the mechanism designer may deviate from the promised mechanism without the individuals being able to detect these deviations. In this paper, we develop a theory of auditability for allocation and social choice problems. Namely, we measure a mechanism's auditability by the smallest number of individuals who can jointly detect any deviation. Our theory reveals stark contrasts between prominent mechanisms' auditability properties in various applications. For priority-based allocation problems, we find that the Immediate Acceptance mechanism is maximally auditable, in the sense that any deviation can always be detected by just two individuals, whereas, at the other extreme, the Deferred Acceptance mechanism is minimally auditable, in the sense that some deviations may go undetected unless there is full information about everyone's reports. For a class of mechanisms that can be implemented as Deferred Acceptance in systematically modified problems, we establish a relation between a mechanism's auditability and the uniqueness of stable outcomes in the modified problems. For the auction setup, we show that the first-price and all-pay auction mechanisms have an auditability index of two, whereas the second-price auction mechanism is minimally auditable. For voting problems with a binary outcome, we characterize the dictatorial rule as the unique voting mechanism with an auditability index of one, and the majority voting rule as the unique most auditable anonymous voting mechanism. Finally, for the choice-with-affirmative-action setting, we compare the auditability indices of prominent reserves mechanisms and establish that a particular reserves rule implementation has superior auditability properties.
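The auditability index lends itself to brute-force computation in small examples. The sketch below is our own formalization for the public-outcome (voting) case, not the paper's code: a group S detects a deviation (an announced outcome x differing from f(R)) if no profile agreeing with R on S could have produced x.

from itertools import product, combinations

def auditability_index(f, reports, outcomes, n):
    """Smallest k such that every deviation from mechanism f is
    detectable by some group of k agents.  Public-outcome case: each
    agent observes the implemented outcome but only their own report.
    (One plausible formalization of the notion described above.)"""
    profiles = list(product(reports, repeat=n))
    worst = 1
    for R in profiles:
        for x in outcomes:
            if x == f(R):
                continue  # x is the promised outcome, not a deviation
            # smallest group size ruling the announced outcome out
            k = next(
                size
                for size in range(1, n + 1)
                for S in combinations(range(n), size)
                if all(f(Rp) != x for Rp in profiles
                       if all(Rp[i] == R[i] for i in S))
            )
            worst = max(worst, k)
    return worst

# Dictatorship: agent 0 decides, and any deviation is caught by agent 0
# alone -- consistent with the characterization in the abstract.
print(auditability_index(lambda R: R[0], [0, 1], [0, 1], 3))  # 1
# Three-voter majority rule: two like-minded voters always suffice.
print(auditability_index(lambda R: int(sum(R) >= 2), [0, 1], [0, 1], 3))  # 2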
Decentralized Many-to-One Matching with Bilateral Search
I analyze a finite decentralized many-to-one search model in which firms and workers meet randomly and time is nearly costless. In line with the existing literature, stable matchings of the many-to-one market can be enforced as search equilibria. However, in many-to-one search, firms collect workers cumulatively, so, unlike in centralized matching markets, the collective structure of the firms fundamentally affects the search process. For instance, dynamically stable matchings may not be sustained as search equilibria because seats can be used strategically over time. Furthermore, although stability in many-to-one markets can be analyzed through their related one-to-one markets, the many-to-one search model differs essentially from its one-to-one counterpart. One sufficient condition for the equilibria of many-to-one markets to coincide with those of the related one-to-one market is that firms have additively separable utility over workers.
Feed for good? On regulating social media platforms
Social media platforms govern the exchange of information between users by providing personalized feeds. This paper shows that the pursuit of engagement maximization, driven by monetary incentives, results in low-quality communication and the proliferation of echo chambers. A monopolistic platform disregards social learning and curates feeds that consist primarily of content from like-minded individuals. We study the consequences for learning and welfare of transitioning to this algorithm from the previously employed chronological feed. We show that the platform could create value by using its privileged information to design algorithms that balance learning and engagement, maximizing users' welfare. However, incentivizing a monopolist to embrace such an approach presents challenges. To address this, we propose interoperability as a measure to overcome network effects in platform competition, level the playing field, and prompt platforms to adopt the socially optimal algorithm.
Equivalence of Strategy-proofness and Directed Local Strategy-proofness under Preference Extensions
Coarse Information Design
We study an information design problem with a continuous state space and a discrete signal space. Under convex value functions, the optimal information structure is interval-partitional and exhibits a dual expectations property: each induced signal is the conditional mean (taken under the prior density) of its interval, and each interval cutoff is the conditional mean (taken under the value function's curvature) of the interval formed by the neighbouring signals. This property makes it possible to examine which parts of the state space are more finely partitioned and facilitates comparative statics analysis. The analysis extends to general value functions and can be adapted to study coarse mechanism design.
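In symbols (our notation, as one reading of the property described above): with prior density f, convex value function v, cutoffs c_0 < c_1 < ... < c_K, and induced signals s_1 < ... < s_K, the dual expectations property is the pair of fixed-point conditions

\[
s_k \;=\; \frac{\int_{c_{k-1}}^{c_k} \theta\, f(\theta)\,\mathrm{d}\theta}{\int_{c_{k-1}}^{c_k} f(\theta)\,\mathrm{d}\theta},
\qquad
c_k \;=\; \frac{\int_{s_k}^{s_{k+1}} \theta\, v''(\theta)\,\mathrm{d}\theta}{\int_{s_k}^{s_{k+1}} v''(\theta)\,\mathrm{d}\theta},
\]

so each signal is the prior-weighted mean of its interval, and each cutoff is the curvature-weighted mean of the interval spanned by its two neighbouring signals.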
Search Disclosure
Recent advances in online tracking technologies have enabled online firms to inform their rivals that a consumer has obtained an offer from them. We call the provision of this information search disclosure and integrate it into the Wolinsky (1986) model of sequential search. We show that firms voluntarily engage in search disclosure only if search costs are low or price revisions are infeasible. The information exchange that can emerge in equilibrium enables price discrimination that reduces consumer surplus and total welfare. By contrast, mandating firms to use search disclosure at all times can raise consumer surplus and total welfare.
Data Linkage between Markets: Does the Emergence of an Informed Insurer Cause Consumer Harm?
A merger of two companies active in seemingly unrelated markets creates a data linkage: by operating in a product market, the merged company acquires an informational advantage in an insurance market where companies compete in menus of contracts. In the insurance market, the informed insurer earns rent through cream-skimming. Some of this rent is passed on to consumers in the product market. Overall, the data linkage makes consumers better off when the insurance market is competitive and, under some conditions, even when the insurance market is monopolistic. Data-sharing requirements and concerns about long-term monopolization are discussed.
Moral hazard and adverse selection under the generalized distribution approach (brown bag)
We study the design of optimal contracts through which a risk-neutral principal motivates a risk-averse agent to produce output. The principal-agent problem is formulated under the generalized distribution approach, where the agent can choose an arbitrary distribution of output at a Kullback-Leibler divergence cost. We focus on the case where the agent has private information about the production environment and show that the optimal menu of contracts exhibits the standard ‘no distortion at the top’ property. Under further assumptions, the optimal menu features full screening.
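The core of the generalized distribution approach can be illustrated in its simplest, one-type form (our notation; the paper layers private information on top of this): an agent facing reward schedule w and baseline output distribution Q who chooses a distribution P at KL cost c solves

\[
\max_{P}\;\; \mathbb{E}_P\!\left[w(x)\right] \;-\; c\, D_{\mathrm{KL}}\!\left(P \,\|\, Q\right),
\]

whose solution, by the Donsker–Varadhan variational formula, is the familiar exponential tilt of the baseline,

\[
\frac{\mathrm{d}P^{*}}{\mathrm{d}Q}(x) \;=\; \frac{e^{w(x)/c}}{\mathbb{E}_Q\!\left[e^{w(x)/c}\right]}.
\]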
Advocacy and cheap talk (brown bag)
We study advocacy in a model of information investigation and communication, with the latter taking place via cheap talk. The question of interest is whether to assign the task of investigating a piece of unverifiable information, which is then communicated to a decision maker, to one or to two investigators. Conceptually, this is related to Dewatripont and Tirole (1999) in that investigators are a priori unbiased but can be endogenously turned into advocates. A key difference in our model, however, is the role of information, which we treat as unverifiable and manipulable, so that communication takes the form of cheap talk. In contrast to Dewatripont and Tirole (1999), we find that the decision maker weakly prefers assigning one investigator over two. Applications include the comparison of legal systems and centralized versus decentralized information investigation in multi-divisional organizations.
Eliciting information from multiple experts via grouping
A decision maker (DM) seeks to determine whether to adopt a new policy or maintain the status quo. To do so, she consults finitely many experts whose common interests differ significantly from hers. As suggested by Wolinsky (2002), partial communication among experts ("grouping mechanisms") can result in the revelation of more information than full communication, requiring neither transfers nor commitment: by allowing communication only within groups of experts and thereby changing the events in which votes are pivotal, the DM may be able to manipulate the experts' strategies to her advantage. We elaborate on this, characterizing optimal grouping mechanisms and conditions under which grouping improves upon full communication.
Feed for good? On the effects of personalization algorithms in social platforms
In this paper, a social media platform governs the exchange of information among users with preferences for sincerity and conformity by providing personalized feeds. We show that the pursuit of engagement maximization results in the proliferation of echo chambers. A monopolistic platform implements an algorithm that disregards social learning and provides feeds that consist primarily of content from like-minded individuals. We study the consequences for learning and welfare of transitioning to this algorithm from the previously employed chronological feed. While users' experience improves under the platform's optimal algorithm, social learning worsens; indeed, learning vanishes in large populations. However, the platform could create value by using its privileged information to design an algorithm that balances learning and engagement, maximizing users' welfare. We discuss interoperability as a possible regulatory solution that would eliminate entry barriers in platform competition caused by network effects, thereby inducing competing platforms to adopt the socially optimal algorithm.
Decentralized Many-to-One Matching with Random Search
I analyze a canonical many-to-one matching market within a decentralized search model with frictions, where a finite number of firms and workers meet randomly until the market clears. I compare the stable matchings of the underlying market with equilibrium outcomes when time is nearly costless. In contrast to the case where each firm has just a single vacancy, I show that stable matchings are not obtained as easily. In particular, there may be no Markovian equilibrium that uniformly implements either the worker- or the firm-optimal stable matching in every subgame. The challenge results from the firms' ability to withhold capacity strategically. Yet this is not the case for markets with vertical preferences on one side, for which I construct an equilibrium strategy profile that leads to the unique stable matching almost surely. Moreover, multiple vacancies enable firms to implicitly collude and achieve unstable but firm-preferred matchings, even under Markovian equilibria. Finally, I identify a sufficient condition on preferences that rules out such opportunities.
Recruitment and Information Provision in Auctions with Learning
In auctions, I explore the interaction between buyers' flexible information acquisition and the seller’s incentives for recruitment and information provision. Contrary to the literature on entry costs, I find that limiting participation is never optimal. The seller’s incentive for information provision is extremal: she either provides maximal information or none. Different recruitment costs induce distinct auction settings: high recruitment costs deter an active auction; intermediate costs lead to a two-buyer auction with no information provision and potential obfuscation; low costs induce an auction with many participants and maximal information provision.
Multidimensional Learning with Misspecified Interactions
We investigate long-term learning outcomes in an exogenous learning environment with multidimensional states and signals under misspecification. We provide a convergence result and general properties of limit beliefs. Turning to the value of additional information, we find that no source is universally beneficial: for every possible signal structure, there exists a scenario in which incorporating the information results in long-term beliefs that are worse for the agent. Understanding the true signal structure does not necessarily help in determining which structures are beneficial in a concrete situation, but understanding the agent's (mis-)perception can.
Robust Equilibria in Generic Extensive-form Games
We prove the two-player, generic extensive-form case of a conjecture of Govindan and Wilson (1997a,b) and Hauk and Hurkens (2002), which states that an equilibrium component is essential in every equivalent game if and only if the index of the component is nonzero. This provides an index-theoretic characterization of the concept of hyperstable components of equilibria in generic extensive-form games, first formulated by Kohlberg and Mertens (1986).
Compound Lotteries without Compound Independence
Compound lotteries are useful modelling tools for information preferences, ambiguity, and dynamic choice. A key assumption in past treatments of preferences over such objects has been ‘Compound Independence’, a weakened version of Independence that, surprisingly, leads back to expected utility even under weaker assumptions. I present two representation theorems that do away with Compound Independence and offer new recursive utility functions representing wider classes of preferences over multi-stage lotteries. I characterize risk and information attitudes for such preferences and offer an application to investor behavior that rationalizes changing information preferences and myopic loss aversion.
A Model of Decision Confidence Formation
We study informational dissociations between decisions and decision confidence. We explore the consequences of a dual-system model: the decision system and the confidence system have distinct goals but share access to a source of noisy and costly information about a decision-relevant variable. The decision system aims to maximize utility, while the confidence system monitors the decision system and aims to provide good feedback about the correctness of the decision. In line with existing experimental evidence on the importance of post-decisional information in confidence formation, we allow the confidence system to accumulate information after the decision. We aim to provide a statistical foundation for the post-decisional stage used in descriptive models of confidence. However, we find that it is not always optimal to engage in the second stage, even for a given individual in a given decision environment. In particular, there is scope for post-decisional information acquisition only for relatively fast decisions. Hence, a strict distinction between one-stage and two-stage theories of decision confidence may be misleading, because both may arise from a single underlying mechanism.
Product Differentiation with Partially Informed Consumers
We investigate a Hotelling model of spatial competition featuring two firms and a continuum of consumers with finite reservation prices. Consumers face uncertainty about their locations but obtain a costless signal provided by an information designer. Firms first select locations and subsequently set prices. Our focus is on identifying signal structures that maximize either total surplus or consumer surplus. We find that the signal structures achieving these objectives depend strongly on the reservation prices.
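For concreteness, the canonical payoff structure underlying such a model (our notation, with linear transport cost t; the talk's exact specification may differ): a consumer at location ℓ with reservation price r buys from firm i, located at y_i and charging p_i, only if the surplus

\[
u_i(\ell) \;=\; r \;-\; p_i \;-\; t\,\lvert \ell - y_i \rvert
\]

is nonnegative and at least that offered by the rival firm. The finite reservation price r is what allows consumers to abstain, and the designer's signal shapes each consumer's posterior over ℓ.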
Screening Knowledge
A principal (she) tests an agent’s (he) knowledge of a subject matter. She has preferences over his unobserved quality, which is correlated with his knowledge. Modeling the subject matter as an unknown state and knowledge as beliefs over it, I show that optimal tests are simple: they take the form of True-False, weighted True-False, or True-False-Unsure, regardless of the principal’s preferences, the distribution of the agent’s beliefs, its correlation with his quality, or his knowledge thereof. The need to elicit knowledge forces the principal to trade off the efficacy of the test in terms of whom it rewards against how much it rewards them. If there is an ex-ante “obvious” answer, the optimal resolution of this trade-off leads to a partial penalty for that answer, even if it is correct, or a partial reward for a “counterintuitive” answer, even if it is incorrect. When the principal can pick the subject matter, she picks one that admits no such ex-ante obvious answer. In this case, the highly prevalent True-False test is always optimal, regardless of the principal's preferences, the agent’s learning, or the specific choice of subject matter.