
Reference Dependence and Labor-Market Fluctuations

Final Report Summary - RDLMF (Reference Dependence and Labor-Market Fluctuations)

The EU-funded project "Reference Dependence and Labor-Market Fluctuations" shows how insights from behavioral economics can be fruitfully applied to a central issue in business-cycle theory and, more broadly, to enhance our understanding of market interaction.
Economists have long pondered the observation that wages display downward rigidity and do not fall in recessions as much as one might expect on the basis of supply-and-demand analysis. An idea with a long pedigree, going back to Keynes in the 1930s, is that fairness considerations deter employers from cutting wages during recessions. Specifically, the theory is that the labor contract's inherent incompleteness forces employers to rely to some extent on workers' intrinsic motivation. When workers feel that they are treated unfairly, their intrinsic motivation is damaged and their output declines. According to this "morale hazard theory", wage cuts relative to a "reference point" have exactly this effect, and for this reason employers avoid them.
One of the main outputs of my funded project (summarized in the paper "Reference-Dependent Preferences and Labor Market Fluctuations", published in the NBER Macroeconomics Annual) is a theoretical framework for this morale hazard theory, which integrates three separate literatures at the forefront of economic research today: macroeconomic models of search and matching, behavioral models of social preferences, and recent behavioral models of time-inconsistent, reference-dependent preferences. Instead of conducting the traditional macroeconomic competitive-equilibrium analysis, we follow the modern approach of non-cooperative game theory and show that the unique (subgame-perfect) equilibrium of our model exhibits the following properties: existing workers experience downward wage rigidity, as well as destruction of output following negative shocks due to layoffs or loss of morale; newly hired workers earn relatively flexible wages, though not as flexible as in the benchmark without reference dependence; and market tightness is more volatile than under this benchmark. Our framework is therefore able to shed light on a puzzle that has attracted much attention in the macroeconomic literature (the "Shimer puzzle"): how can we explain large fluctuations in unemployment alongside downward rigidity in wages?
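To make the morale hazard mechanism concrete, here is a minimal, purely illustrative Python sketch (not the model in the paper): a firm sets the wage of an incumbent worker whose effort drops when the wage is cut below a reference point (the wage agreed in the boom), and of a new hire who has no such reference. The functional forms, the size of the morale penalty and all numbers are assumptions made for illustration only.

```python
# Illustrative sketch of the morale hazard mechanism (not the paper's model).
# Effort rises with the wage (gift-exchange style) but drops by a fixed "morale"
# penalty when an incumbent's wage is cut below the reference point (the old wage).

def effort(wage, reference=None, morale_penalty=0.4):
    base = wage ** 0.5
    if reference is not None and wage < reference:
        return (1 - morale_penalty) * base  # a perceived unfair cut destroys morale
    return base

def profit(productivity, wage, reference=None):
    return productivity * effort(wage, reference) - wage

def best_wage(productivity, reference=None):
    grid = [w / 100 for w in range(10, 201)]          # candidate wages 0.10 .. 2.00
    return max(grid, key=lambda w: profit(productivity, w, reference))

boom_wage = best_wage(2.0)                            # wage agreed in the boom (1.00 here)
after_shock_incumbent = best_wage(1.6, reference=boom_wage)
after_shock_new_hire = best_wage(1.6, reference=None)

print(f"boom wage:                 {boom_wage:.2f}")
print(f"incumbent after the shock: {after_shock_incumbent:.2f}  (downwardly rigid)")
print(f"new hire after the shock:  {after_shock_new_hire:.2f}  (adjusts downward)")
```

In this toy calculation, cutting the incumbent's wage after the negative shock destroys enough morale (and hence output) that the firm prefers to keep the old wage, while the new hire's wage falls with productivity, which is the qualitative pattern described above.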
The project goes beyond reference dependence and the labor market to explore further behavioral frictions in other markets. In today's world, vast amounts of information are potentially accessible at little to no pecuniary cost. However, consumers typically have only a limited ability to process this wealth of information. What, then, are the market implications of consumers' limited attention? Do market forces curb the ability of firms to exploit this limitation? Are consumers made better off by paying more attention?
My paper "Competing for Consumer Inattention" (joint with Geoffroy de Clippel and Kareen Rozen, and published in the Journal of Political Economy) proposes a new game-theoretic framework for analyzing how firms interact with consumers who purchase multiple types of goods but are able to examine only a limited number of markets for the best price. In the equilibrium of our model, consumers focus their limited attention on their highest expenses. A firm's price can therefore either draw or deflect attention to its market (by being among the most expensive or among the cheapest), and consequently limited attention introduces a new dimension of cross-market competition, which has a surprising implication: increasing consumer attention can actually reduce consumer welfare. With less attention, consumers are more likely to miss the best offers; but the enhanced cross-market competition decreases the average price paid, as leading firms try to stay under consumers' radar.
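To fix ideas, the following toy Python sketch illustrates only the draw-versus-deflect tradeoff faced by a single firm, taking the prices in the other markets as given. The attention span k, the payoff assumptions and all numbers are illustrative; this is not the paper's equilibrium or welfare analysis.

```python
# Toy sketch of the attention channel only (assumed numbers). A "leading" firm
# compares drawing attention (being inspected, so it earns only the competitive
# price) with deflecting it (undercutting the k-th highest price posted in the
# other markets, so it keeps its margin).

import heapq

def attention_threshold(other_market_prices, k):
    """A consumer with attention span k inspects the k most expensive markets,
    so prices weakly above the k-th highest price elsewhere attract inspection."""
    return heapq.nlargest(k, other_market_prices)[-1]

def leading_firm_choice(other_market_prices, k, monopoly_price, competitive_price):
    deflect = min(monopoly_price, attention_threshold(other_market_prices, k) - 0.01)
    draw = competitive_price              # inspected consumers find the best offer
    if deflect > draw:
        return ("deflect attention", deflect)
    return ("draw attention", monopoly_price)

others = [10.0, 7.0, 5.0, 3.0]            # assumed prices posted in the other markets
choice, price = leading_firm_choice(others, k=2, monopoly_price=12.0, competitive_price=2.0)
print(f"the firm prefers to {choice} and posts a price of {price:.2f}")
```

The point of the sketch is simply that the firm's pricing problem depends on the prices posted in other markets, which is the cross-market dimension of competition described above.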
People's ability to articulate their wants is often far from perfect. Consequently, they may be able to give only a noisy signal of what they are looking for: they may be able to articulate only a general product category (e.g., a movie genre); their verbal descriptions may be vague, either due to inherent ambiguity (does "football" mean soccer or American football?) or because giving precise descriptions is hard ("the blonde singer who sounds like Rihanna"); and when using a classified directory, they may struggle to fit what they are looking for into its rigid classification scheme. Put differently, consumers are unsure in which pool of providers they should conduct their search.
This form of friction can have important implications for markets for search platforms. A search platform is a site that attracts firms from one side of a market into a "search pool" - a collection of firms with which consumers on the other side transact via some search process. A search intermediary (SI henceforth) is a market institution that provides such search platforms. Real-life examples include human-resource and real-estate agencies, classified directories and, more recently, online search engines and recommender systems. If the SI can obtain a perfect signal of the consumer's preferences, it can sort the two-sided market into homogeneous segments. In such a differentiated two-sided market, each signal functions as a distinct search platform, potentially with its own access price. However, when consumers' signals are imperfect, the two-sided market may fail to achieve an efficient outcome. In particular, since firms compete for access to search pools, firms that offer popular products may crowd out firms that cater to minority tastes, i.e. the "long tail" of the preference distribution.
One common approach to attenuating this potential failure is to apply a "broad match": take firms that attach themselves to the search platform associated with one signal and introduce them into the search pool of consumers who are characterized by another signal. My research (joint with Ran Spiegler) addressed the following questions: Can a competitive, differentiated two-sided market implement an efficient outcome under suitably designed broad matching? When it cannot, will another mechanism perform better? And can we extend our framework to environments that have yet to establish decentralized markets for search platforms?
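To make the notion of broad matching concrete, here is a small illustrative Python sketch under assumed numbers: some consumers can only articulate the vague keyword "football", and without broad matching they are pooled only with the firms that attached themselves to that keyword, so the minority product is crowded out of the vague keyword's pool. The keywords, the attachment pattern and the noise probability are assumptions made for illustration, not the model in the papers.

```python
import random
random.seed(0)

# Toy sketch of "broad matching" (assumed setup, not the mechanism in the paper).
# Consumers want one of two products but may only articulate a noisy keyword;
# firms attach themselves to a single keyword's search pool.

PRODUCTS = ["soccer", "american_football"]

def consumer_query(product, noise=0.4):
    """With some probability the consumer can only say the vague keyword 'football'."""
    return "football" if random.random() < noise else product

# Which firms sit in each keyword's search pool (assumed attachment pattern):
pools = {
    "soccer": ["soccer"],
    "american_football": ["american_football"],
    "football": ["soccer"],          # only the popular product buys the vague keyword
}

def broad_match(pools):
    """Broad matching: firms attached to a specific keyword are also introduced
    into the pool of the related, vaguer keyword."""
    widened = {k: list(v) for k, v in pools.items()}
    widened["football"] = pools["soccer"] + pools["american_football"]
    return widened

def match_rate(pools, trials=10_000):
    hits = 0
    for _ in range(trials):
        want = random.choice(PRODUCTS)
        pool = pools[consumer_query(want)]
        hits += want in pool
    return hits / trials

print(f"match rate, exact matching only: {match_rate(pools):.2%}")
print(f"match rate, with broad matching: {match_rate(broad_match(pools)):.2%}")
```

In this toy example, broad matching raises the fraction of consumers who find a suitable firm, because consumers of the "long tail" product who can only articulate the vague keyword are no longer matched exclusively with the popular product's firms.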
In a paper published in the American Economic Review, Ran and I developed a simple model of a "search designer" (an abstract description of a search engine) in which consumers submit queries that are imperfectly indicative of the type of product they are looking for. We then constructed an explicit auction-like mechanism that is efficient and maximizes the search designer's profits. In this mechanism, advertisers bid for keywords and the designer augments the resulting matching with a notion of "broad matching". In a follow-up paper, we extend the main modeling ideas to the design of preference-based targeted advertising on social networks.
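For readers unfamiliar with keyword auctions, the following generic Python sketch shows how a crude broad-match rule can fold bids on specific keywords into the auction for a related, vaguer keyword. The per-keyword second-price rule, the bid discount and the keyword relations are all illustrative assumptions; this is not the mechanism constructed in the paper.

```python
# Generic keyword-auction sketch with a crude broad-match rule (illustrative
# assumptions only; not the mechanism constructed in the paper). Each keyword
# sells one slot by second-price auction; under broad matching, a bid on a
# specific keyword is also entered, scaled down, into the auction for a
# related vaguer keyword.

RELATED = {"soccer": ["football"], "american_football": ["football"]}
BROAD_MATCH_DISCOUNT = 0.5   # assumed scaling of a bid carried to a related keyword

def run_auctions(bids, broad_match=False):
    """bids: {advertiser: {keyword: bid}}. Returns {keyword: (winner, price)}."""
    entries = {}                                   # keyword -> list of (bid, advertiser)
    for advertiser, kw_bids in bids.items():
        for kw, b in kw_bids.items():
            entries.setdefault(kw, []).append((b, advertiser))
            if broad_match:
                for related_kw in RELATED.get(kw, []):
                    entries.setdefault(related_kw, []).append(
                        (b * BROAD_MATCH_DISCOUNT, advertiser))
    results = {}
    for kw, offers in entries.items():
        offers.sort(reverse=True)
        winner = offers[0][1]
        price = offers[1][0] if len(offers) > 1 else 0.0   # second-price rule
        results[kw] = (winner, round(price, 2))
    return results

bids = {"SoccerShop": {"soccer": 3.0}, "GridironGear": {"american_football": 2.0}}
print(run_auctions(bids, broad_match=False))  # the vague keyword 'football' is never auctioned
print(run_auctions(bids, broad_match=True))   # broad match fills it and sets a positive price
```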