| text_with_holes (string, 166–4.13k chars) | text_candidates (string, 105–1.58k chars) | A (string, 6 classes) | B (string, 6 classes) | C (string, 6 classes) | D (string, 6 classes) | label (string, 4 classes) |
---|---|---|---|---|---|---|
5.1 Administrative Brazilian data
We use the Brazilian linked employer-employee data set RAIS. The data contain detailed information on all employment contracts in the Brazilian formal sector, going back to the 1980s. <|MaskedSetence|> <|MaskedSetence|> We also exclude the public sector (institutional barriers make flows between the Brazilian public and private sectors rare) as well as the military. <|MaskedSetence|>
|
**A**: These workers are defined as matching with unemployment (or the informal sector) in years we do not observe them.
**B**: Finally, we exclude the small number of jobs that do not pay workers on a monthly basis.
**C**: The sample we work with includes all workers between the ages of 25 and 55 employed in the formal sector in the Rio de Janeiro metro area at least once between 2009 and 2018.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 4
|
The Hilbert space approach is only applicable to quadratic functionals, yet even for these cases, it provides a tight bound only for a limited range of parameters. <|MaskedSetence|> <|MaskedSetence|> The following lemma shows that our infinite-dimensional optimization problem can be approximated with a finite-dimensional optimization problem by reducing to only finite subalgebras. An algebra is called finite if it contains only a finite number of sets. <|MaskedSetence|>
|
**A**: We prove the following lemma in Appendix F.
**B**: This section discusses how linear programming duality helps us with our problem.
We rely on the basic finite-dimensional duality.
**C**: To find a tight bound in other situations, we need to use other techniques.
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 2
|
<|MaskedSetence|> Due to the duration nature of the setting, the difference decreases over time under the counterfactual of no intervention. <|MaskedSetence|> Therefore, in expectation, standard diff-in-diff will estimate a negative treatment effect. However, the true treatment effect is positive, as seen by the positive gap between the solid and dashed blue lines. In fact, in this model the bias of standard diff-in-diff is roughly four times the value of the true treatment effect. <|MaskedSetence|>
|
**A**: For simulation evidence of the poor performance of standard diff-in-diff under this model (and good performance of our proposed alternatives), see Appendix A.
**B**:
In the figure, we see that mean outcomes for group 1 are greater than for group 2 in the initial period.
**C**: In this example, the convergence is of a sufficient magnitude that the average difference in the observable factual outcomes is smaller over the post-treatment period than in the pre-treatment period.
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 4
|
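The excerpt in the row above argues that, when the two groups' untreated means converge over time, standard diff-in-diff can deliver a negative estimate even though the true effect is positive. Below is a minimal, self-contained toy simulation of that mechanism; the exponential-decay means, noise level, and effect size are invented for illustration and are not taken from the cited paper or its Appendix A.

```python
# Toy illustration: converging untreated means + a positive treatment effect
# can still produce a negative standard diff-in-diff estimate.
import numpy as np

rng = np.random.default_rng(0)
tau = 0.5                                    # true (positive) treatment effect
pre, post = np.arange(0, 3), np.arange(3, 6)

def untreated_mean(level, t):
    return level * np.exp(-0.3 * t)          # group means converge toward zero over time

y1_pre = untreated_mean(10, pre) + rng.normal(0, 0.05, pre.size)
y2_pre = untreated_mean(5, pre) + rng.normal(0, 0.05, pre.size)
y1_post = untreated_mean(10, post) + tau + rng.normal(0, 0.05, post.size)  # group 1 treated
y2_post = untreated_mean(5, post) + rng.normal(0, 0.05, post.size)

did = (y1_post.mean() - y1_pre.mean()) - (y2_post.mean() - y2_pre.mean())
print(f"true effect = {tau:+.2f}, standard diff-in-diff estimate = {did:+.2f}")
```

Because group 1's untreated mean falls faster than group 2's, the estimated double difference comes out negative despite the positive true effect.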
<|MaskedSetence|> <|MaskedSetence|> Instead, it is always optimal for the consumer to send a vague message when her private posterior belief $b$ is close to $\bar{p}$.
The statement of the main result below and the argument behind it mirror the conclusions from the three-period example (Section 4.3): the reviewer’s desire to inflate the review of a marginally-bad item for sake of social experimentation results in garbled communication being optimal. The reviewer ends up issuing the same review for a product that she believes is barely good enough and for a product that is subpar but not bad enough for her to outright reject. <|MaskedSetence|> Conversely, if a reviewer is sufficiently confident in her judgement of the product quality, then it is optimal for her to report her impressions truthfully..
|
**A**: After reading such a review, the next consumer is exactly indifferent between buying the product and not, so buys it for sake of writing a review and benefitting the future generations.
**B**:
This section describes the equilibrium of the infinite-horizon version of the game, assuming it exists.
**C**: We show that perfect communication is not the optimal communication strategy.
|
BCA
|
BCA
|
ABC
|
BCA
|
Selection 1
|
A central part of decompositions involves estimating a counterfactual quantity: what values of $Y$ a group would have if it had the $X$ distribution of the other group; e.g., what whites’ wealth would be if they had blacks’ distribution of average lifetime labor income. <|MaskedSetence|> <|MaskedSetence|> For instance, the range of the average lifetime labor income distribution was not the same for these races in the 1980s and the beginning of the 1990s. <|MaskedSetence|> These observations are not informative for building counterfactual wealth for the other race. Current decomposition methods either trim these observations or assign virtually zero weight to them in their estimation process, even though they may contribute significantly to the overall observed gap in $Y$.
.
|
**A**: In order to build this counterfactual, decomposition methods find blacks who are similar to each white with respect to all observable characteristics other than race and use these blacks’ wealth and a model to predict it.
**B**: An issue with this approach is that it is not always possible to find blacks who are similar to whites with regard to $X$.
**C**: There is a nontrivial set of blacks exhibiting average income of nearly zero, with no white households earning that low; and a considerable number of whites at the top of the lifetime earnings distribution, unmatched by blacks.
|
ABC
|
ABC
|
BCA
|
ABC
|
Selection 2
|
<|MaskedSetence|> Since the completion of a domain is the minimal invariant domain containing it, the completion reflects the complexity of possible aggregate behavior.
We formalize this intuition in application to Fisher markets: simple exchange economies where consumers with fixed incomes face a fixed supply of goods. <|MaskedSetence|> Computing an equilibrium of a Fisher market turns out to be a challenging problem even in a seemingly innocent case of linear preferences, thus limiting the applicability of pseudo-market mechanisms. We explore the origin of the complexity and demonstrate that computing equilibria can be hard, even in small parametric domains, if their completion is large. We show how to construct domains with small completion and describe an algorithm making use of this smallness. <|MaskedSetence|>
|
**A**:
For invariant domains, the aggregate behavior is as simple as that of a single agent.
**B**: Such markets are essential for the pseudo-market (or competitive) approach to fair allocation of resources (Moulin, 2019, Pycia, 2022) and serve as a benchmark model for equilibrium computation in algorithmic economics (Nisan, Roughgarden, Tardos, and
Vazirani, 2007).
**C**: The choice of a domain is interpreted as bidding language design.
.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
Unlike individually testing each assumption, Frandsen, Lefgren, and Leslie (2023) propose a joint test for all assumptions underlying the judge leniency design. Their test leverages the property that, in the judge leniency design, the average outcome at the judge level should exhibit a smooth relationship with the propensity score (or the judge-level treatment probability). It ought to have a bounded slope, where the bounds depend on the limits of the outcome variable’s support. Although Frandsen, Lefgren, and Leslie (2023)’s testable implication has the desirable property that it assesses all the assumptions simultaneously, we show there is still relevant information in the data distribution essential for evaluating the judge leniency design’s validity, but not used in Frandsen, Lefgren, and Leslie (2023)’s testable implication. <|MaskedSetence|> In other words, our testable implications exhaust all the information in the observed data distribution. As seen in previous methods, non-sharp tests have practical virtue when there is no easily tractable characterization of the sharp testable implications of a model’s assumptions. If a non-sharp test rejects, it conveys an informative result that the assumptions should be rejected. However, there are also important trade-offs to consider. First, a non-sharp test can have no power against certain violations since it does not consider all possible constraints on the data distribution. Second, different non-sharp tests can lead to discordant empirical results and potentially misleading interpretations of the estimand of interest (see Kédagni, Li, and Mourifié, 2020). <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Our sharp test addresses both issues as it is a consistent test built upon sharp testable implications and, therefore, a useful complement to the existing tests..
**B**: This difference is also demonstrated by the simulation and empirical studies reported in Section 4.
To the best of our knowledge, our test is the only sharp test available for assessing the validity of the judge leniency design.
**C**: For instance, two different non-sharp tests may produce conflicting results because they consider different aspects of the observed data distribution.
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 1
|
<|MaskedSetence|> However, these country-level aggregates may hide much important detail. Therefore, we use a second, micro-level dataset that records business transactions between companies in Hungary. These data are derived from the value added tax (VAT) reports collected by the National Tax and Customs Administration of Hungary. Firms are obligated to declare all business transactions in Hungary if the VAT content of their operations exceeds ca. <|MaskedSetence|> The dataset is anonymized and available for research purposes through the HUN-REN CERS Databank. <|MaskedSetence|> However, we will also use the micro data themselves to analyze colocation at the firm level..
|
**A**: It has been used to construct interfirm supplier networks to study production processes, systemic risks and interdependencies between companies at the national scale \parencite{diem2022quantifying, lorincz2023transactions, pichler2023science}.
We aggregate firm-to-firm supplier transaction values between 2015 and 2017 to the level of pairs of three-digit industries to derive a dataset that is similar in structure to the IO tables in the WIOD data.
**B**: 10,000 EUR in that year.
**C**:
Using WIOD tables makes our analysis comparable to previous studies, which have also relied on aggregate input-output tables \parencite{egk2010coagglom, diodato2018coagglom}.
|
ABC
|
CBA
|
CBA
|
CBA
|
Selection 3
|
<|MaskedSetence|> Thus, there is a worker and another stable matching in which she could be better off. <|MaskedSetence|> <|MaskedSetence|> In each stage of the algorithm, the firm left alone in the previous stage chooses the best worker willing to be employed with this firm (such a worker always exists). This generates a new empty position in another firm. This process continues until the worker who disrupted the initial stability of the market receives an offer. Of course, this offer improves upon her initial situation.
.
|
**A**:
3 Algorithm
Consider a labor market where all agents are matched under a stable matching and that such stable matching is not the worker-optimal.
**B**: This worker can resign her position in order to wait for a better offer.
**C**: We show in this section that such a better offer always arrives and that the market is re-stabilized into a matching in which no worker is worse off.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 3
|
The article “Rehabilitation of Time Dimension of Investment in Macroeconomic Analysis” by Tsiang (1949) is interesting for our topic for various reasons: it attributes the neglect of the period of production to three factors (pp. <|MaskedSetence|> First, the discovery that the period of production cannot be measured exactly. Second, the fact that the concept of the period of production was static. As a consequence, “the baby [of the time dimension] is cast away with the bath water” of the period of production (p.204). Thus, Keynes neglected time in his General Theory. <|MaskedSetence|> Hicks’ attempt to draw attention to the importance of the role of time did not meet success.
In Hayek's view, if we want to express processes in a mathematical form, we have to use difference or differential equations. An alternative consists of diagrams in which time is one of the dimensions. Those are what Hayek used in The Pure Theory of Capital (1941). For us, the question is whether the simulations with ABM depict or approximate these or similar diagrams sufficiently accurately. We think the answer is affirmative. <|MaskedSetence|>
|
**A**: And third, the influence of Keynes drove other economists to neglect time, too.
**B**: Curiously enough, this is consistent with Hayek's own view (Hayek, 1981, 2012), in which he speaks of production processes in terms of rivers and their tributaries: to Don Lavoie (personal communication) he had expressed his interest in computer simulations of production processes.
**C**: 204-5).
|
ABC
|
CAB
|
CAB
|
CAB
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> The network is constructed using the correlations of the closing stock prices, obtaining a Minimum Spanning Tree (MST) via a specific distance function and adding back “high-value links”. In this work we further study the analytical properties and empirical results of this indicator. In particular, we seek to verify whether or not this estimator satisfies the aforementioned properties (1-3).
The Ollivier-Ricci ($\operatorname{O-Ricci}$) curvature of the associated network of Sandhu-Georgiou-Tannenbaum (based on Boginski et al. <|MaskedSetence|> In this document we argue that the proposed object is not an economic risk indicator but rather a good ex-post metric which can show the size and periods of a financial crisis. More specifically, we will show that the $\operatorname{O-Ricci}$ curvature of the constructed network does not indicate tendencies towards a crisis but rather accurately identifies the size and length of the crisis.
.
|
**A**: and Tse-Liu-Lau) is called a “crash hallmark”.
**B**: The recent works Sandhu-Georgiou-Tannenbaum and Samal et al.
**C**: have proposed a very interesting object as an indicator of market fragility: the average Ollivier-Ricci curvature of a specific network.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
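The row above describes building a market graph from closing-price correlations, extracting a Minimum Spanning Tree, adding back "high-value links", and averaging the Ollivier-Ricci curvature of the result. The sketch below walks through that pipeline on toy data: the distance transform d_ij = sqrt(2(1 - rho_ij)) is the standard choice in this literature, while the 0.7 correlation cutoff, the toy returns, and the commented-out use of the third-party GraphRicciCurvature package are illustrative assumptions rather than the cited papers' exact choices.

```python
# Sketch: correlation -> distance -> MST backbone -> add back high-correlation links.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 20))          # toy daily returns for 20 stocks
rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))             # standard correlation distance

G_full = nx.Graph()
n = rho.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G_full.add_edge(i, j, weight=dist[i, j], corr=rho[i, j])

backbone = nx.minimum_spanning_tree(G_full, weight="weight")
for i, j, d in G_full.edges(data=True):
    if d["corr"] > 0.7 and not backbone.has_edge(i, j):   # add back "high-value links"
        backbone.add_edge(i, j, **d)

# Average Ollivier-Ricci curvature over the backbone could then be computed, e.g. with
# the GraphRicciCurvature package (external dependency, API per its documentation):
# from GraphRicciCurvature.OllivierRicci import OllivierRicci
# orc = OllivierRicci(backbone, alpha=0.5)
# orc.compute_ricci_curvature()
# avg_curv = np.mean([d["ricciCurvature"] for _, _, d in orc.G.edges(data=True)])
```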
Beyond capturing linear relationships, I employ a Random Forest model as a secondary economic prediction model. <|MaskedSetence|> It is a powerful tool for capturing nonlinear relationships and has been widely applied in economic research for variable selection, forecasting, and causal inference\footfullcite{coulombe2020macroeconomy}. The Random Forest algorithm is well-suited for handling complex relationships and interactions within the data, providing a more comprehensive understanding of the factors influencing the outcome. As such, I also use a Random Forest Model to compare the model performance across all the datasets.
Synthetic Data Generation:
To complete the Model Selection process, I will now outline the unique approach I take to generate synthetic data to fill data gaps within the Womply business dataset. <|MaskedSetence|> While the original dataset contains missing daily values for merchants_all variable, it still has enough weekly values for me to be able to use it as a target variable to train the random forest model. <|MaskedSetence|>
|
**A**: Then, I use the trained model to predict (or impute) the missing values within the original dataset..
**B**: Random Forest is an ensemble learning method that constructs multiple decision trees during training and outputs the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees\footfullcite{Tripp2023DiabetIA}.
**C**: I train a Random Forest Model on the second dataset mentioned in the pre-processing section (original dataset with missing rows removed) as it represents the cleanest form of real data.
|
CAB
|
BCA
|
BCA
|
BCA
|
Selection 2
|
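The row above describes training a Random Forest on the cleanest available Womply data and then predicting (imputing) the missing merchants_all values. A minimal sketch of that impute-by-prediction step is shown below on simulated data; the column names, toy data-generating process, and hyperparameters are assumptions for illustration, not the author's exact setup.

```python
# Train on rows with an observed target, then predict the target for rows where it is missing.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "spend_all": rng.normal(size=200),
    "day_of_week": rng.integers(0, 7, size=200),
})
df["merchants_all"] = 0.8 * df["spend_all"] + 0.1 * df["day_of_week"] + rng.normal(scale=0.1, size=200)
df.loc[rng.choice(200, size=40, replace=False), "merchants_all"] = np.nan   # simulate data gaps

features = ["spend_all", "day_of_week"]
observed = df["merchants_all"].notna()

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(df.loc[observed, features], df.loc[observed, "merchants_all"])        # train on clean rows

df.loc[~observed, "merchants_all"] = rf.predict(df.loc[~observed, features])  # impute the gaps
```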
We discover that there is a massive difference between the agent’s welfare in these two classes. <|MaskedSetence|> Because the posterior is so extreme, the principal can offer the agent a single action that no Bayesian would take, but that the agent with the extreme (and incorrect) posterior would. By scaling up the penalties for taking this action in “bad states”, the agent’s expected loss is limitless. <|MaskedSetence|> The agent’s ex ante expected payoff cannot be worse than that from her outside option (which we normalize to zero). <|MaskedSetence|> If the agent can be exploited, there must be some signal realization that leads her to take the “wrong” action from the perspective of the Bayesian agent who sees that signal realization. But for an underreacting agent, this means that this “wrong action” is attractive at the prior and therefore under at least one other Bayesian posterior. To put it differently, any action that hurts the agent following some signal realization benefits her at another, and the latter gain necessarily outweighs the former loss. This is a bit of an over-simplification, but the essence is that the “direction” of a mistake for an underreacting agent precludes exploitation.
.
|
**A**: If there exists a signal realization that produces an overly extreme posterior–formally, that produces a posterior that lies outside of the closed convex hull of the support of the Bayesian posteriors–the principal can exploit the agent to an arbitrarily large degree.
**B**: The explanation for this impossibility is more subtle than that for the overreaction result.
**C**: A corollary of this result is that if an agent overreacts to a signal realization, she can be exploited.
On the other hand, if an agent underreacts to information, if all of her posteriors are (weakly) closer to the prior than the Bayesian posteriors, she cannot be exploited.
|
ACB
|
ACB
|
CAB
|
ACB
|
Selection 1
|
Green and Srivastava (1986) and Kubler et al. (2014), building on Varian (1983), characterize expected utility maximization with concavity and Polisson et al. <|MaskedSetence|> <|MaskedSetence|> Echenique et al. <|MaskedSetence|> For an excellent survey of the revealed preference literature for choice under risk, uncertainty, and time see Echenique (2020).
.
|
**A**: Echenique and Saito (2015) characterize subjective (i.e. the state probabilities are not given) expected utility maximization.
**B**: (2019) characterize exponential discounted utility.
**C**: (2020) characterize expected utility maximization without concavity.
|
CAB
|
ABC
|
CAB
|
CAB
|
Selection 3
|
Among the commonly used indices, we show that the Afriat, Varian, Swaps, and Least Squares indices are continuous but not accurate while the Houtman-Maks index is accurate but discontinuous. That these goodness-of-fit indices have shortcomings is somewhat understood in the literature. Indeed, Andreoni and Miller (2002) and Murphy and Banerjee (2015) point out that the Afriat index is inaccurate and Halevy et al. <|MaskedSetence|> <|MaskedSetence|> Similarly, Fisman et al. <|MaskedSetence|> What has been less understood is that the inaccuracy of the Afriat and Varian indices and the discontinuity of the Houtman-Maks index is not a defect of these particular goodness-of-fit indices but rather the outcome of an inevitable trade-off between accuracy and continuity.
.
|
**A**: (2007) (in appendix III) point out that the Houtman-Maks index has a continuity problem.
**B**: (2018) (see footnote 10 of their paper)
**C**: point out that the Varian index is inaccurate.
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 4
|
We also develop a new estimation approach, namely the 2SLS approach. <|MaskedSetence|> In particular, it enables us to consider various extensions. For example, it can accommodate time-varying parameters while allowing general heterogeneity patterns in the first stage. <|MaskedSetence|> Generic clustering approaches such as that by \textcite{YuGuVolgushev2022} cannot be applied here because it is impossible to obtain preliminary estimates from unit-by-unit estimation in the presence of time-varying parameters. <|MaskedSetence|>
|
**A**: We note that controlling for time-varying group-specific intercepts is the motivation of \textcite{BonhommeManresa2015}.
**B**: This approach has several advantages over existing methods.
**C**: It is thus necessary to develop estimation methods tailored to handle time-varying parameters.
.
|
CBA
|
BAC
|
BAC
|
BAC
|
Selection 3
|
The integration of sensing and communications functions (aka ISAC) is a major feature of the advent of the 6G infrastructure (Wei et al., 2023; Kim
et al., 2022). <|MaskedSetence|> <|MaskedSetence|> The co-habitation of different functions in the same network has posed several resource sharing problems. For example, a time-sharing discipline has been proposed and optimized in Xie
et al. (2023). Signal design has been optimized in Wu
et al. (2023). The optimal allocation of the overall transmitting power between the sensing and the communication tasks has been investigated in Liu et al. <|MaskedSetence|> (2018). In all these works, optimization is achieved just under a technical performance viewpoint with no economic considerations. A high-level analysis of the 6G network infrastructure and its cost drivers (coverage and antennas, backhaul, spectrum and edge computing units) is conducted in Kokkinis et al. (2023) for some use cases, where sensing is, however, not considered..
|
**A**: The feasibility of using 6G frequencies for passive radars has been examined in Lingadevaru et al.
**B**: (2022), while the capacity-distortion trade-off in a memoryless channel has been evaluated in Kobayashi
et al.
**C**: (2022).
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> According to the rules of these cartels, a user must comment on and like the five posts preceding their own submission before submitting a post to the cartel for engagement. This rule allows us to clearly identify which cartel members were bound to engage with which Instagram posts. <|MaskedSetence|> Similarly, we observe, instead of having to infer, which engagement originates from the cartel according to the cartel rules. The Telegram cartels include 220,893 unique Instagram posts that we were able to map to 21,068 Instagram users..
|
**A**: In other words, we directly observe, instead of having to infer, which posts are included in the cartel.
**B**: This history provided us with three relevant pieces of information for each submission: the Telegram username, Instagram post shortcode, and the time of submission.
**C**:
Telegram cartel history.
From Telegram, we collected the communication history of nine cartels: six general interest cartels and three topic-specific cartels: fitness & health, fashion & beauty, travel & food.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 4
|
Refine the vocabulary for the next epoch. To achieve this, we employ word embeddings to reduce the vocabulary size while retaining information from the active tokens of the best solution in the preceding step. A word embedding space maps tokens into high-dimensional vectors, where similar tokens exhibit shorter distances between their vectors. <|MaskedSetence|> Then, for each dimension, we calculate the cosine similarity between the average token embedding of that dimension and all the tokens in the original vocabulary of size $V$. We retain tokens with a cosine similarity larger than 0.2. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The set of unique, similar tokens across all dimensions, constitutes our new vocabulary.
**B**: Effectively, this step narrows down the search space to a more relevant vocabulary, considering that the optimally selected tokens are indicative of the relevant tokens concerning the dependent variable.
4..
**C**: First, we compute the average token vector for each dimension of the best solution in step 2.
|
CAB
|
CAB
|
ABC
|
CAB
|
Selection 1
|
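The row above outlines the vocabulary-refinement step: average the embeddings of the active tokens in each dimension of the best solution, score every token in the original vocabulary of size V by cosine similarity against that average, keep tokens above 0.2, and take the union across dimensions as the new vocabulary. The sketch below mirrors those steps with random stand-in embeddings; the array sizes and the best_solution token ids are purely illustrative.

```python
# Refine the vocabulary via per-dimension centroids and a cosine-similarity cutoff of 0.2.
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 50                        # vocabulary size and embedding dimension (toy values)
embeddings = rng.normal(size=(V, d))   # stand-in for pre-trained token embeddings
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

best_solution = [np.array([3, 17, 256]), np.array([42, 77])]   # active token ids per dimension

new_vocab = set()
for active_ids in best_solution:
    centroid = embeddings[active_ids].mean(axis=0)             # average token vector of the dimension
    centroid /= np.linalg.norm(centroid)
    sims = embeddings @ centroid                                # cosine similarity to all V tokens
    new_vocab.update(np.flatnonzero(sims > 0.2).tolist())       # retain sufficiently similar tokens

print(f"refined vocabulary size: {len(new_vocab)} (out of {V})")
```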
<|MaskedSetence|> In this work, we extend RetailSynth to enable the training and evaluation of RL agents that target promotions (coupons) to individual customers. We propose an environment where the agent is trained using offline batch data to target store-wide coupons to customers at discrete time steps. In alignment with industry practice, coupons are set up as discrete actions across a range of discount levels and evaluated based on their impact on customer revenue over the evaluation period, while monitoring secondary metrics such as the profit margin impact, the number of categories a customer purchases, and the fraction of customers active at the end of the evaluation period. We characterize the environment using static baseline policies where all customers receive the same coupon and then compare the performance of the baseline policies to personalized policies learned by the RL agents. We segment the customers based on their latent price sensitivity and show that personalized policies typically target less aggressive discounts to less price-sensitive customers. Based on our observation that price-insensitive customers still often receive large discounts, there do appear to be opportunities to improve agent performance on this benchmark.
To our knowledge, our work is the first to benchmark RL agents on simulated retail customer shopping trajectories. <|MaskedSetence|> <|MaskedSetence|> We intend this paper to serve as a blueprint for how to simulate AI agents that optimize the end-to-end retail customer journey. The remainder of our paper is organized as follows: Section 2 gives a detailed overview of the simulation environment; Section 3 describes the agent training and evaluation experiments; and Section 4 describes challenges and directions for future work.
.
|
**A**: We also provide insights into which customer features effectively summarize the sparse transaction data and a deep dive into metrics to consider prior to deployment.
**B**: It provides much needed guidance to practitioners on the potential uplift of deploying coupon-targeting agents in a multi-category retail environment.
**C**: We previously introduced RetailSynth, an interpretable multi-stage retail data synthesizer, and showed that it faithfully captures the complex nature of the retail customer decision-making process over the full journey from choosing to visit a storefront to deciding exactly which product and how much to purchase (Xia et al., 2023).
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 4
|
[erdil2023china] elaborates further on how the TFP estimates from [feenstra2015next] might change if these factors are taken into account. <|MaskedSetence|> This is problematic, as the exact value of the returns to research $r$ is of interest in many cases, and the rough approximation from dividing the growth rates in outputs and inputs shows how overestimating or underestimating the output growth rate can lead to inaccurate estimates of $r$.
Besides TFP, similar problems occur for other latent variables. For example, in the case of software R&D, we often want to find a multiplicative metric of “software efficiency" that changes in a particular domain over time, just as TFP is a measure of “resource use efficiency" with the same character. As this variable is not directly measured, it is also a latent variable. <|MaskedSetence|> Software innovations could lead to greater compute savings for larger applications than smaller ones, e.g. by improving the complexity class of a particular problem; and they could be heterogeneous across different problems in the same domain, e.g. <|MaskedSetence|>
|
**A**: The changes are substantial for many countries and can make the difference between TFP remaining flat as opposed to showing sustained growth over long periods of time.
**B**: a narrow benchmark might become easier to beat with fewer resources while a broader benchmark sees less progress.
.
**C**: However, turning it into a single-dimensional efficiency multiplier is difficult because of the multidimensional and scale-dependent nature of software progress.
|
ACB
|
ABC
|
ACB
|
ACB
|
Selection 1
|
<|MaskedSetence|> Firstly, in some markets, the visual aesthetic matters a lot. It is possible to capture these visual elements by using pre-trained embedding models on product images and text. These embeddings can explain up to half the variation in sales and two-thirds the variation in prices. They can also intuitively partition the product space and help us discover products close to each other. It is possible to also use these embeddings in economic models of brand choice or consumer demand. Estimation results from logit-based models show that incorporating consumer heterogeneity matters - the price sensitivity of consumers varies from -0.5 to -0.1. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: That is not a small amount of variation and it may be useful for retailers to be able to differentiate such consumers.
.
**B**: That is, for a 50% reduction in the price of a product, we foresee some sections of consumers increase purchases by only 5%, but other sections increase purchases by 25%.
**C**: 8 Conclusion and Next Steps
In this paper, we find a few interesting insights.
|
CBA
|
BCA
|
CBA
|
CBA
|
Selection 4
|
Homogeneity implies that information spillovers are only a function of the distance from types, where information decays at a rate proportional to the distance.
A discussion about the choice of the model is in order. First, note that a more general model would include a two-dimensional type $\theta=$ (dimension of interest, willingness to pay). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Moreover, information spillovers imposed through (M) and (H) seem a natural approximation of the learning process as the investment/project size differs.
|
**A**: Therefore, it seems suitable to pin down willingness to pay with the investment/project size, hence reducing private information to one-dimensional.
**B**: In the consulting example, the dimension represents the size of investment/project, which induces a natural ordering of the states (no such ordering is natural with consumer attributes).
**C**: One can find settings in which this is suitable, e.g., consider purchase of targeted ads, where dimension of interest is consumer attribute like gender, income, etc.
|
CBA
|
BCA
|
CBA
|
CBA
|
Selection 4
|
<|MaskedSetence|> Convictions on Violent Crimes). <|MaskedSetence|> [5] Another subset I investigate is all crimes except quality-of-life crimes. <|MaskedSetence|> [2] [26] [4]
.
|
**A**: Using the definition put forth by the state of California, crimes in the case actions dataset categorized as violent crimes include — Assault and Battery, Assault, Robbery, Other Sex Law Violations, Weapons Charges, Hit-and-Run, Homicide, Attempted Homicide, Kidnapping, Arson, Rape, Lewd or Lascivious Behavior, Manslaughter, Vehicular Manslaughter, Child Molestation, Obscenity, Other Enhancements.
**B**:
Before temporal aggregation, I filter the prosecution dataset to independently analyze crime and SFDA action subsets (e.g.
**C**: Offenses that fall into this category are — Marijuana Possession, Liquor Laws, Disturbing Peace, Disorderly Conduct, Malicious Mischief, Trespassing, Prostitution, Petty Theft, Burglary, and Vandalism.
|
BAC
|
CAB
|
BAC
|
BAC
|
Selection 4
|
The drifting diffusion model (DDM) is also popular for using stopping time. While both the RAS and the DDM use stopping time, the two models do so from quite different angles. There are fundamental differences in the underlying assumptions of the RAS and the DDM. In the DDM, only two alternatives are present and one alternative is chosen if the net evidence is above a particular threshold. In contrast, in the RAS, multiple alternatives are present; and the decision-maker will choose the most preferred option if it is in their consideration set. The combination of noise and a threshold in the DDM leads to the possibility of mistakes if the threshold is very low or if there is an unusually high level of noise. <|MaskedSetence|> <|MaskedSetence|> In general, in random attention models, researchers rule out the possibility of mistakes, and the choices decision-makers make are based only on their preferences and the consideration set at the time they make the decision. This difference leads the DDM and the RAS to have different focuses of study. The DDM has a focus on choice accuracy, while the RAS has a focus on the preference distribution.
The RAS is close to the random attention model (RAM) (Cattaneo et al., 2020) in terms of the nonparametric assumption on attention rules. The RAM uses a nonparametric restriction named monotonicity in the size of the attention-forming process. It states that the probability of considering a given set cannot increase if the choice set grows. <|MaskedSetence|> Therefore, their framework is suitable for data on individuals’ choices but not so much when the population has a distribution of different preferences.
.
|
**A**: The RAM provides a general framework to test different models of stochastic consideration when preferences are homogeneous.
**B**: In the RAS, the assumption of time monotonicity is exogenous to decision-makers’ attention and is conditional on time, although the stopping time at the decision is endogenous.
**C**: However, in the RAS, decision-makers are assumed to make the best choice among the alternatives that they actually consider.
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 1
|
In detail, COVID-19 caused a shock to GDP that was in many ways unprecedented in recent history and tested the capacity of current methodologies in the presence of extraordinary situations. The main challenges lie both in how the forecasting methods behave and in how we should evaluate the forecasts that common models provide to policymakers. The first challenge has been studied extensively in the literature, where Huber et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: (2023) argue that nonlinear methods may better accommodate extreme situations.
**B**: Differently, Foroni
et al.
**C**: (2022) and Schorfheide and
Song (2021) stress the resilience of popular models, such as mixed frequency or dynamic factor models to accommodate the severity of recessions.
.
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 1
|
Table 1 provides summary statistics of the ETF data. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The median ETF has $150 million under management, holds 62 stocks in its portfolio, has an active share of 31% (i.e. it tilts 31% of its assets away from the value-weighted market portfolio), and receives a daily inflow of $1.6 million (0.13% relative to assets).
Table 1: Summary Statistics..
|
**A**: Our sample consists of 1868 ETFs with 1,175,487 ETF-day observations and 34,897,091 ETF-day-stock observations.
**B**: Our main analysis focuses on ETF data, for which daily holdings are available.
**C**: While ETFs are exchange-traded vehicles that generally attempt to replicate a rules-based index, the portfolios of thematic ETFs are subjectively chosen by a fund manager.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 3
|
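For reference, the "active share" reported in the row above (the fraction of assets tilted away from the value-weighted market portfolio) is conventionally defined as below; this is the standard formula from the fund literature rather than one stated in the excerpt itself.

$$
\text{ActiveShare}_f \;=\; \tfrac{1}{2}\sum_i \left| w_i^{f} - w_i^{\text{mkt}} \right|,
$$

where $w_i^{f}$ is fund $f$'s portfolio weight on stock $i$ and $w_i^{\text{mkt}}$ is the stock's weight in the value-weighted market portfolio, so an active share of 31% means 31% of assets deviate from market weights.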
To bypass these challenges, we draw inspiration from the consumer behaviour techniques routinely used to estimate consumers’ willingness-to-pay for a given product Kohli and Mahajan [1991], Train [2009], Miller et al. [2011], and we reinterpret individual-level thresholds to adoption in terms of a discrete-choice problem Train [2009]. In doing so, we connect the complex contagion literature with the individual-level perspective (see Fig. 1A).
Fig. 1: Estimating individual–level thresholds. (A) The complex contagion theory assumes that individual–level adoption choices are determined by the decision-makers’ threshold. On the other hand, individual-level perspectives focus on estimating individuals’ utilities of adopting from choice data. Our work reconciles the two perspectives by reinterpreting the individual-level thresholds in terms of individuals’ attribute and social utilities. (B) For a susceptible adopter, the status-quo utility is initially larger than the utility from adopting. As the number of adopters increases, so does her social utility. The threshold is defined as the minimal level of social signal at which the utility from adopting exceeds the utility from not adopting. (C) For both experiments, the individual-level thresholds estimated from experimental data hold out-of-sample predictive power, as illustrated by their superior accuracy compared to a random-threshold baseline. <|MaskedSetence|> (E) The distribution of the proportion of independent adopters (as opposed to susceptible adopters) is significantly lower for the app adoption experiment (AA, in blue) than for the policy support experiment (PS, in orange), which highlights the importance of context for the distribution of individual thresholds. <|MaskedSetence|> <|MaskedSetence|> Data in this panel is based on a sample of products, as described in Supplementary Note S2.2.
.
|
**A**: (D) In general, different products exhibit different threshold distributions, as illustrated by the two examples provided here (instant messaging app in blue; energy policy in orange).
**B**: There are significantly more observations that fall within the gray stripe in the AA experiment than in the PS experiment, which explains the higher percentage of susceptible adopters in the AA experiment.
**C**: (F) An individual is susceptible to adopting a given product when her resistance is positive and lower than the marginal utility of social signal, which corresponds to the gray stripe in the $\gamma$–$R$ diagram.
|
ACB
|
ACB
|
ACB
|
BAC
|
Selection 3
|
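The row above defines an individual's threshold as the minimal social signal at which the utility from adopting exceeds the status-quo utility, and panel (F) phrases susceptibility in terms of a resistance $R$ and a marginal social utility $\gamma$. Written out with illustrative symbols (not necessarily the paper's exact notation), that definition reads

$$
\tau_i \;=\; \inf\bigl\{\, s \ge 0 \;:\; u_i^{\text{adopt}} + \gamma_i\, s \;>\; u_i^{\text{status quo}} \,\bigr\} \;=\; \frac{R_i}{\gamma_i},
\qquad R_i \;\equiv\; u_i^{\text{status quo}} - u_i^{\text{adopt}},
$$

so an individual is a susceptible adopter exactly when $0 < R_i < \gamma_i$, which corresponds to the gray stripe in the $\gamma$–$R$ diagram when the social signal is expressed as a share in $[0,1]$.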
Formally incorporating this pilot data in our design problem leads to a multi-stage or adaptive framework, where past experimental data can influence, and improve, future site selection.
In our application, we choose a small set of migration corridors from among hundreds of candidates across South Asia in which to experimentally evaluate a program that facilitates the adoption and use of a mobile banking technology.
Lee et al. <|MaskedSetence|> The intervention made it easier for migrant workers to send and receive remittances, and an experiment showed an increased volume of urban-to-rural remittances and reduced rural poverty. <|MaskedSetence|> The goal of the new experiments is to derive policy recommendations for adopting the intervention in all the candidate sites. <|MaskedSetence|>
|
**A**: Specifically, the experiments are chosen to inform a rule for each of the hundreds of migration corridors which maps each corridor’s characteristics to an up-or-down recommendation for whether, ultimately, the technology adoption program should be implemented there..
**B**: (2021) report on the original intervention, which was implemented in a single migration corridor in Bangladesh.
**C**: It is unclear, however, when and where those results might generalize.
|
BCA
|
BCA
|
BCA
|
ABC
|
Selection 1
|
To test these hypotheses, we empirically investigate the diffusion of information about importing through the domestic production network using a dataset provided by the Spanish Tax Agency (AEAT), which contains data gathered from Value Added Tax (VAT) declarations. This dataset includes anonymized information about the basic characteristics (sales, number of employees, sector, labor costs, location, etc.) of the whole population of Spanish firms, together with the value of their imports for two aggregate geopolitical areas (EU and extra-EU) and all annual domestic transactions between them in amounts larger than 3,005 euros. <|MaskedSetence|> We then empirically examine whether the (geographical area-specific) import experience of a firm’s domestic trade partners, differentiating between its providers and customers, is relevant for explaining its decision to start importing (from this area). (Footnote 5: We, therefore, focus on firms’ importing behavior at the extensive margin, i.e. import starters, and leave the investigation of the intensive margin for future work. In principle, information transmission about importing through the firm-to-firm domestic network could also be relevant for explaining firms’ behavior at the intensive margin. For example, this information could be valuable for decreasing the variable costs of importing or simply because a better matching with providers stimulates a higher volume of imports.)
We estimate these peer effects in a linear-in-means framework, therefore assuming that the firm’s decision to start importing from a given origin/area is affected by the firm’s characteristics, a weighted average of its peer characteristics, and the weighted averages of their importing status. <|MaskedSetence|> <|MaskedSetence|> Bramoullé et al. (2009)) we assume that peer effects operate with a time lag – it takes time for a firm to utilize the importing-relevant knowledge acquired from their peers. The same assumption is made in Bisztray et al. (2018) and Dhyne et al. (2023).
.
|
**A**: The network determines the weights.
**B**: By leveraging this dataset, we construct the empirical Spanish domestic firm-level production network for each year during the 2010-2014 period.
**C**: Unlike the standard linear-in-means setting (i.e.
|
BAC
|
BAC
|
ABC
|
BAC
|
Selection 4
|
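The row above describes a linear-in-means specification in which a firm's decision to start importing depends on its own characteristics, a network-weighted average of its peers' characteristics, and the weighted average of their lagged importing status. A stylized version of such a specification is written out below; the symbols, the linear-probability form, and the omission of fixed effects are illustrative simplifications, and the paper's actual equation may differ.

$$
\text{Import}_{i,t} \;=\; \alpha \;+\; \beta' x_{i,t-1} \;+\; \gamma' \sum_{j} w_{ij}\, x_{j,t-1} \;+\; \delta \sum_{j} w_{ij}\, \text{Import}_{j,t-1} \;+\; \varepsilon_{i,t},
$$

where $\text{Import}_{i,t}$ is a binary indicator, $x_{i,t-1}$ collects firm characteristics, and the weights $w_{ij}$ are determined by the domestic production network; the one-period lag reflects the assumption that peer effects operate with a time lag.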
As complex prediction models play ever more essential roles across nearly every sphere of inquiry, it has become more and more pertinent to answer the question: how much should one rely on an AI/ML model? By connecting this to a deep literature on portfolio optimization and the inference on predicted data paradigm [20], we suggest a new framework to evaluate the efficacy of an AI/ML model. <|MaskedSetence|> Although we show this in the context of downstream statistical inference, the asset allocation framework has the potential to be extended to many other metrics used to evaluate AI/ML quality such as accuracy, fairness, and interpretability.
As researchers integrate AI technology across all levels, from domains where data collection is cheap and plentiful (such as web-based text or image data) to domains where data are siloed and expensive (such as healthcare), our framework provides new language to evaluate how to spend limited resources. This can be helpful in situations such as clinical trials, where there is the potential to intelligently replace human subjects with AI-approximated datapoints. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: We show that with this toolkit, one can create actionable decision rules, based on one's budget, the cost of model training, the cost of data, and the performance of the models, that accurately determine the most efficient way to keep AI models up-to-date.
**B**: This can even be helpful in situations where the economics of data within a field is in flux.
.
**C**: This has the potential to save resources while ensuring that traditional efficacy and safety requirements are met.
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 3
|
Next I analyze an attention intervention that makes a specific alternative always be considered. <|MaskedSetence|> In contrast, for the limited consideration model, welfare bounds are the trivial bounds $[0,\infty)$. <|MaskedSetence|> The reason for this is simple: individuals who do not consider an alternative may have an arbitrarily high latent utility shock for it. This makes analysis similar to the question of assessing the welfare impact of a new good, which requires enough structure. <|MaskedSetence|>
|
**A**: The benchmark model of exogenous consideration is useful because this result formally establishes that a model that generalizes this setup in one way – such as by relaxing the exogenous consideration assumption – must specialize in another to guarantee a finite welfare bound.
.
**B**: The model cannot rule out no welfare change, allows an arbitrarily high welfare change, and the bounds do not depend on choice probabilities.
**C**: In the classic model there is no scope for a change.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 1
|
<|MaskedSetence|> A randomized version of that assignment algorithm can be used in order to give a best of both worlds (BoBW) result – a randomized assignment algorithm that assigns each agent not more than her proportional share ex-ante, and not more than her RRR share ex-post. This is the content of Theorem 45. Its proof follows principles that appear in several previous works. Among them, two are most relevant to our work. The general structure of our randomized assignment algorithm follows a pattern introduced in [Azi20]. The use of this pattern simplifies the proof of correctness (compared to some other alternatives that also work). <|MaskedSetence|> <|MaskedSetence|> The proof of Theorem 45 observes that the same algorithm also meets the additional benchmark of giving each agent not more than her RRR share ex-post.
.
|
**A**: The randomized assignment algorithm that we use was already sketched in Section 1.6 of [FH22] (the arxiv version of [FH23]).
**B**:
6.3 Tight Best of Both Worlds Result for Additive Chores
In the proof of Theorem 44, we showed that there is a deterministic assignment algorithm that computes an assignment that is feasible with respect to the RRR share.
**C**: There it was explained that with this algorithm, each agent suffers a cost that is at most her proportional share in expectation, and no more than twice her anyprice share (APS) ex-post.
|
BAC
|
BAC
|
BAC
|
ACB
|
Selection 3
|
4.1 Data
The data used in this analysis was obtained from stakingrewards.com. The sample consists of daily observations of various staking parameters for the Ethereum, Solana, Polkadot, Cardano, Avalanche, and Cosmos blockchains over a two-year period, from 1 January 2022 to 31 December 2023. Due to concerns around data quality, we do not include Algorand in the following analysis. In addition, data on the staking parameters for each blockchain over the same two-year period was collected through desk research. The sources of these data are primarily the websites of the respective blockchains. Finally, some model specifications incorporate data on the price of Bitcoin. <|MaskedSetence|> <|MaskedSetence|> After accounting for missing values, the sample contains 550 observations. <|MaskedSetence|>
|
**A**: Table 3 in Appendix D reports summary statistics.
.
**B**: The sample of daily data is aggregated to a weekly level to reduce noise.
**C**: These data were obtained from coinmarketcap.com and again include daily observations of the same time period.
|
CBA
|
CBA
|
CBA
|
BCA
|
Selection 1
|
<|MaskedSetence|> Individual factors are added into the first-stage Heckman selection model, including age, education years, marital status, health conditions, family size, and the numbers of children and elderly members. <|MaskedSetence|> The results can be explained as follows. Digital finance can broaden the financing channels of women (and men) who have a lower family economic status level, provide more extensive and convenient financing services and information, and promote the inclusion of women (and men), thereby enhancing the wages of women with low economic status relative to those with higher economic status. <|MaskedSetence|> With digital finance development, the gender wage gap can be narrowed among the group with a lower family economic status.
.
|
**A**:
The results of the heterogeneity analysis of women and men are based on the Heckman selection model and are reported in Table 8.
**B**: Regarding family economic status, digital finance has a significant positive impact on the wages of women (and men) at a lower family economic status level, as the coefficients of the interaction between digital finance development and family economic status (low) in columns (1) and (2) of Table 8 are both significantly positive.
**C**: The seemingly unrelated estimation (SUEST) shows that the coefficients among the group of women and men with lower economic status have significant differences (F = 113.06, p = 0.0870).
|
ABC
|
ABC
|
ABC
|
BAC
|
Selection 3
|
In the second literature, generally, there are more papers about the merger of airlines from a multitude of perspectives, such as economies of scale as an incentive for merger (Merkert and Morrell, 2012), productivity and efficiency (Khezrimotlagh et al., 2022), consumer welfare related to flight frequency (Richard, 2003), and network structure (Shaw and Ivy, 1994). Some of the empirical literature addresses the impact of mergers on prices and market power. However, few of them focus directly on the collusive behavior of airlines. For example, Kim and Singal (1993) examine price changes associated with airline mergers and find that prices increased on routes served by the merging firms relative to a control group of routes unaffected by the merger. <|MaskedSetence|> <|MaskedSetence|> The paper reveals that the merger only caused a small increase in the fares of non-stop flights, which may be because the two carriers have little overlapping services before their merger. Luo (2014) further suggests that the effect on ticket prices will be more prominent in the fare of connecting flights, in which two carriers have more overlapping services. <|MaskedSetence|> Notably, if we see the price increase as a signal of collusive pricing behavior, the reasoning provided by Luo (2014) is contrary to the reasoning by Ciliberto et al. (2019), because with little overlapping services, the merged carrier ought to have considerably more multimarket contacts than before, which tends to increase the collusive behavior in the prediction by Ciliberto et al. (2019).
.
|
**A**: The paper also points out that the (small) positive price effect in non-stop flights may be caused by the exit of LCC as an effect of merger.
**B**: The impact of increasing market power of the carrier offsets the gain in consumer welfare resulting from the efficiency increase after the merger, making consumers worse off.
Another notable study is Luo (2014), which focuses specifically on the influence of the 2008 Delta/Northwest merger on the entire aviation market.
**C**: The paper attributes such change to the increasing market power of the firm.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
<|MaskedSetence|> After the disaster, affected workers may have decided to move to locations and jobs that I use as the control, potentially distorting any measurement when employing the two groups. I address this issue with fixed effects interactions between social identifiers and the location of the individual. When spells related to the individual-municipality relationship are included in the model, the parameters absorb any effect related to geographical movement. <|MaskedSetence|> The disaster itself, in other words, the dam rupture and the water contamination that eventually took over the Doce River. Separating these effects in this context is challenging, particularly when I ultimately intend to disentangle TFP and Rosen-Roback effects. <|MaskedSetence|> Further explanation is provided in Section 7..
|
**A**:
Although not as urban as the state capital metropolitan region, the region itself is sufficiently integrated to generate spillover effects due to individuals moving across municipalities.
**B**: Additional details are provided in Section 5.
Moreover, two effects may be at play when measuring the Mariana Disaster.
**C**: I address this issue by employing the empirical analysis in two other regions affected by the disaster: the continuation of the Doce River until the Minas Gerais border and the river estuary region in the Espírito Santo state.
|
ABC
|
ABC
|
CBA
|
ABC
|
Selection 2
|
<|MaskedSetence|> If a half space covers a region that is also covered by another set of half spaces, then we can remove the first half space and the covered region will be the same. <|MaskedSetence|> <|MaskedSetence|> We define a minimal configuration to be one where no half space can be removed and the remaining half spaces still cover the polytope, i.e. each half space contains a point that is uniquely covered by it.
.
|
**A**: Being covered by a set of half spaces corresponds to being dominated by a mixed strategy.
**B**:
The key observation is that we want to remove redundant half spaces.
**C**: A half space being covered by another half space corresponds to dominance by a pure strategy.
|
BCA
|
BCA
|
BCA
|
ACB
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> With the same expectation, as the distribution is skewed, a greater variance indicates more skewness in the price distribution. We discretize the price distribution into 30 scenarios with the occurrence probability of each scenario. Fig. <|MaskedSetence|>
|
**A**:
6.2 Prudent demand
We then design a case with 24 stages, i.e., $T=24$ (time slots), to verify our theoretical analysis.
**B**: We set 6 skewed price distributions with increasing variance and the same expectation, and assume that the event happened at the 10th stage.
**C**: 3 shows the detailed price distributions.
.
|
ABC
|
BCA
|
ABC
|
ABC
|
Selection 1
|
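The row above mentions discretizing a skewed price distribution into 30 scenarios with occurrence probabilities. One simple way to do this, sketched below on a lognormal stand-in distribution (the actual distributions used in the paper are not specified here), is to bin a large Monte Carlo sample into 30 equal-probability quantile bins and use each bin's conditional mean as the scenario price.

```python
# Discretize a skewed price distribution into 30 scenarios with probabilities.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios = 30
prices = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)    # skewed stand-in price draws

edges = np.quantile(prices, np.linspace(0.0, 1.0, n_scenarios + 1))
bins = np.clip(np.searchsorted(edges, prices, side="right") - 1, 0, n_scenarios - 1)

scenario_prices = np.array([prices[bins == k].mean() for k in range(n_scenarios)])
scenario_probs = np.array([(bins == k).mean() for k in range(n_scenarios)])   # roughly 1/30 each

print(scenario_prices.round(2))
print(scenario_probs.sum())    # sanity check: probabilities sum to 1
```

Equal-probability bins are just one choice; unequal bins (e.g. finer resolution in the upper tail) would preserve more of the skewness at the same scenario count.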
<|MaskedSetence|> Notes: Panel (a) plots the $2\times 2$ Wald-DIDs in WLS against those in OLS for all DID-IV designs. <|MaskedSetence|> In Panel (a), the size of each point is proportional to the corresponding weight in OLS. In Panel (b), the size of each point is proportional to the corresponding Wald-DID estimate in OLS. In both panels, the dotted lines represent 45-degree lines. In both panels, Unexposed/Exposed designs yield blue circles, Exposed/Not Yet Exposed designs yield yellow triangles, and Exposed/Exposed Shift designs yield red squares. <|MaskedSetence|>
|
**A**:
Figure 4: Comparisons of $2\times 2$ Wald-DIDs and weights between OLS and WLS in the setting of Miller and Segal (2019).
**B**: In both panels, the dotted lines represent 45-degree lines.
.
**C**: Panel (b) plots the decomposition weights in WLS against those in OLS for all DID-IV designs.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> In both sub-samples the distributions are strongly skewed and fat-tailed, with varying characteristics that do not align with standard parametric families. Whereas this behaviour is readily observed for these two samples, the macroeconomic uncertainty literature provides ample evidence that the same characteristics generally apply to the conditional forecast distribution of economic aggregates. <|MaskedSetence|> (2024), Korobilis et al. (2021), Korobilis (2017), Lopez-Salido and
Loria (2024), and Tagliabracci (2020) and for GDP by Adrian
et al. (2019) and others. <|MaskedSetence|> To capture the entire forecast distribution, the literature estimates a grid of quantile regressions and overlays a known parametric density, such as a skewed-t distribution (Lopez-Salido and.
|
**A**:
As a starting point, Figure 1 shows histograms of the annualized quarterly inflation rate as well as fitted distributions over two sub-samples: from 1974Q1 to 1989Q4 and from 1990Q1 to 2022Q3.
**B**: For inflation, asymmetries and skewness are documented by Banerjee et al.
**C**: In this literature, QRs have emerged as workhorse tools to forecast the conditional percentiles of economic variables.
|
ABC
|
ABC
|
ACB
|
ABC
|
Selection 2
|
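The row above describes the workhorse two-step approach in this literature: estimate a grid of quantile regressions for the variable of interest and then overlay a known parametric density on the fitted quantiles. The sketch below illustrates the idea on simulated data, using statsmodels' QuantReg for the quantile grid and a skew-normal as a convenient stand-in for the skewed-t family cited in the excerpt; the data-generating process and tuning choices are assumptions.

```python
# Step 1: quantile-regression grid; Step 2: fit a parametric density by matching its quantiles.
import numpy as np
import statsmodels.api as sm
from scipy import stats, optimize

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                                    # conditioning variable
y = 2.0 + 0.5 * x + stats.skewnorm.rvs(4, scale=1.5, size=n, random_state=rng)

X = sm.add_constant(x)
quantiles = np.linspace(0.05, 0.95, 19)
fitted_q = np.array(
    [sm.QuantReg(y, X).fit(q=q).predict(np.array([[1.0, 0.0]]))[0] for q in quantiles]
)  # conditional quantiles of y at x = 0

def loss(params):
    a, loc, scale = params
    model_q = stats.skewnorm.ppf(quantiles, a, loc=loc, scale=abs(scale))
    return np.sum((model_q - fitted_q) ** 2)

res = optimize.minimize(loss, x0=[1.0, fitted_q.mean(), fitted_q.std()], method="Nelder-Mead")
a_hat, loc_hat, scale_hat = res.x
print(f"fitted skew-normal: shape={a_hat:.2f}, loc={loc_hat:.2f}, scale={abs(scale_hat):.2f}")
```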
<|MaskedSetence|> (2014). <|MaskedSetence|> <|MaskedSetence|> Estimates are computed following the results from Section 4.2. Standard errors are obtained using a bootstrap clustered at the site level.
.
|
**A**:
The results in this table are based on data from the RCT in Behaghel et al.
**B**: We estimate coefficients from univariate regressions of site-level LATEs on an indicator for sites with a job seekers’ average employability above the median.
**C**: We also regress the private-program’s LATEs on an indicator for large international job placement firms, and an indicator for consultancies.
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 3
|
<|MaskedSetence|> While the root mean square error (RMSE) of the equilibrium regression is notably higher, the equilibrium leads to lower dispatch costs than the baseline regression. This confirms Corollary 1, which states that the equilibrium achieves the least-cost dispatch across the day-ahead and real-time markets on average. Interestingly, the equilibrium yields larger day-ahead costs, as it tends to withhold zero-cost wind power generation from the day-ahead market. However, it also features real-time costs that are ten times smaller than in the baseline case. <|MaskedSetence|> As a side benefit, the equilibrium regression also leads to a lower $\text{CVaR}_{10\%}$ measure of dispatch cost, defined as the average cost across the $10\%$ worst-case scenarios. These statistics are similar on training and testing datasets, highlighting the ability of the equilibrium regression to generalize beyond the training dataset.
We next study how the equilibrium drives regression model specification. <|MaskedSetence|> Although all wind farms are granted the same set of features, the equilibrium requires them to select features differently, depending on the wind farm’s position in the power grid. This result highlights the importance of coordinating feature selection to maximize private revenues, while simultaneously achieving cost-optimal dispatch.
.
|
**A**: As a result, the average cost error in equilibrium is half as small as under the baseline regression.
**B**: Table II summarizes the prediction and system outcomes under baseline and equilibrium regressions and contrasts them with the oracle.
**C**: Figure 2 shows the optimal feature selection for the baseline regression, identical for all producers, and the equilibrium selection.
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 1
|
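As a side note to the item above, the CVaR at the 10% level used there is simply the average dispatch cost over the 10% worst-case scenarios. A minimal numpy sketch of that statistic; the cost vector is simulated and not related to the paper's case study.

```python
import numpy as np

def cvar(costs, alpha=0.10):
    """Average of the worst alpha share of scenarios (here: the highest costs)."""
    costs = np.sort(np.asarray(costs, dtype=float))[::-1]    # descending: worst first
    k = max(1, int(np.ceil(alpha * costs.size)))             # number of worst-case scenarios
    return costs[:k].mean()

rng = np.random.default_rng(1)
scenario_costs = rng.lognormal(mean=10.0, sigma=0.3, size=1_000)  # simulated dispatch costs
print(f"mean cost: {scenario_costs.mean():.1f}, CVaR_10%: {cvar(scenario_costs):.1f}")
```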
This feature is consistent with observed test patterns in real practice. In our leading example, it is common for a contractor to adopt a simple pre-contract test where only some particular sources of complexity are examined. <|MaskedSetence|> If it passes, the contractor will undertake the project knowing that some other complexities may still exist, while the project owner will also approve in light of such a possibility.
Second, players’ pessimism affects the possibility of coordination asymmetrically. This argument is backed up by the finding that the project is launched with a positive probability when the principal is pessimistic but not when the agent is pessimistic. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: In the latter case, because of the players’ conflicted interests, the principal’s proposal will only make a pessimistic agent even more pessimistic, leading to no chance of launching the project.
**B**: In the former case, the principal can utilize test design to detect some lemons, making herself more optimistic when seeing a passed result.
.
**C**: If the project fails the simple test, the contractor will abandon it.
|
CAB
|
CAB
|
ABC
|
CAB
|
Selection 2
|
This work is closely connected to Calonico et al. <|MaskedSetence|> <|MaskedSetence|> This work also aligns with the broader literature on the intersection of RDD and panel data. Pettersson-Lidbom (2012) explores settings where RDD is combined with fixed effects to address small sample issues and violations of the continuous support assumption. Lemieux and Milligan (2008), use a first-difference RD approach to eliminate individual-specific fixed effects by capitalizing on the longitudinal nature of the Finnish Census data. Lastly, Cellini et al. <|MaskedSetence|>
|
**A**: Frölich and Sperlich (2019) have a section exploring the possibilities of an intersection between RDD and DiD; however, the absence of a formal derivation of econometric properties leaves a gap in understanding, with only preliminary discussions on identification and estimation procedures.
**B**: (2014), whose contributions have been instrumental to robust nonparametric RDD estimation and whose estimation methods are used for our DiDC approach.
**C**: (2010) introduce “dynamic RD models”, accommodating scenarios with multiple treatment opportunities and looking into the dynamics of treatment effects.
.
|
CBA
|
BAC
|
BAC
|
BAC
|
Selection 4
|
<|MaskedSetence|> in [9] solve a dynamic model of households’ mortgage decisions incorporating labor income, house price, inflation, and interest rate risk. <|MaskedSetence|> The model highlights the fact that the default decision depends not only on the extent to which a borrower has negative home equity, but also on the extent to which borrowers are constrained by low current resources. In the model, constraints shift the threshold at which a borrower optimally decides to exercise the irreversible option to default.
The impact of rising income inequality on the overall level of debt in the system as well as the default rates was studied by Kösem in [10]. By developing an equilibrium-based model of the mortgage markets, they show that rising income inequality actually leads to a lower amount of mortgage debt, but higher mortgage default rates. <|MaskedSetence|>
|
**A**: The model quantifies the effects of adjustable versus fixed mortgage rates, LTV ratios, and mortgage affordability measures on mortgage premia and default.
**B**:
Campbell et al.
**C**: This is due to a higher share of borrowers forced into risky low-value loans.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 3
|
Computationally Equivalent Tasks for Cognitive Modeling.
Modeling human cognition is challenging because the hypothesis space of possible cognitive mechanisms that can explain human data equally well is vast. <|MaskedSetence|> However, human rationality has been a subject of debate in economics and psychology for over a century [42, 16, 21, 25, 37]. While significant progress has been made in understanding human rationality [e.g., 25, 15], the advent of LLMs seems to challenge the need for rational theories. Simply training LLMs to predict the next word appears sufficient to produce human-like behaviors, suggesting that we can model human cognition without the constraints imposed by rational theories. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: However, our experimental results suggest an alternative route to enhancing the correspondence between behaviors produced by LLMs and humans: pretraining on computationally equivalent tasks that a rational agent would need to master.
**B**: Future research should investigate the impact of different assumptions about the nature of rationality on task content and distributions, and explore whether there are more effective assumptions for pre-training models to explain human behavior.
Implications for Theories of Human Risk and Time Preferences.
**C**: This makes principles of rationality desirable, as assuming people are rational in some sense greatly constrains the hypothesis space, thereby making scientific inference more effective.
|
CAB
|
CAB
|
ABC
|
CAB
|
Selection 2
|
<|MaskedSetence|> (2020) and Cook et al. (2023), which has been utilized in existing studies in various fields (van der Laan, 2008). <|MaskedSetence|> (2022), and Degenne (2023). <|MaskedSetence|>
|
**A**: Our argument about the lower bound is inspired by the arguments in Kato (2024b), Komiyama et al.
**B**: The upper bound under the small-gap regime is derived using the results in Kato (2024a).
.
**C**:
When estimating the mean outcome, we employ the Adaptive Augmented Inverse Probability Weighting (A2IPW) estimator from Kato et al.
|
CAB
|
CAB
|
BCA
|
CAB
|
Selection 2
|
<|MaskedSetence|> (2024). Multi-dimensionality allows for applications with multiple Receivers (under public communication), potentially caring about different moments of the public belief. For another example, suppose that there are N+1 primitive states of the world but a Sender only observes a partially revealing signal about the primitive state. The Sender sends a signal informative about her own posterior belief to a Receiver. As long as the Receiver maximizes expectation of a utility function that depends on the primitive state—by the law of iterated expectations—her payoff will only depend on the expectation of the Sender’s belief, which can be represented as an element of an N-dimensional simplex. Arieli et al. (2023b) offer an analogous interpretation of the one-dimensional moment persuasion problem. Finally, moment persuasion captures information acquisition problems for certain well-behaved utility functions of the agent acquiring information (e.g., representing mean-variance preferences).
Weak duality for (multi-dimensional) moment persuasion can be established directly and is often sufficient to solve instances of persuasion problems. <|MaskedSetence|> <|MaskedSetence|> More substantially—due to strong duality—we are able to derive general predictions about the structure of solutions (Theorems 6, 7, 8, as well as Propositions 1 and 2 in the application in Section 5). In particular, strong duality implies that the complementary slackness conditions (C) must always hold; even if the optimal p is unknown, these conditions impose restrictions on the optimal persuasion scheme.
|
**A**: Pancs (2017), as well as a separable special case of Kolotilin
et al.
**B**: However, our approach has two distinct advantages.
**C**: First, by deriving duality for moment persuasion from the general case, we unify existing approaches (differing in the representation of the constraints in the moment persuasion problem), demonstrate how the dual variables in these alternative approaches relate to one another (Theorem 5), and extend them to the multidimensional case.
|
ABC
|
ABC
|
ACB
|
ABC
|
Selection 4
|
2.2 Model discussion
Several remarks are in order. First, we assume Poisson attention technologies throughout this paper, a modeling choice motivated by several considerations. <|MaskedSetence|> For instance, compare the distribution of posterior beliefs and decisions of a single decision maker using Poisson technologies (see Lemma 1) with those in Hu et al. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Given this similarity, exploring equilibrium attention networks generated by Poisson attention technologies seems a natural first step.
**B**: Beyond providing tractability, Poisson attention technologies in single-agent decision making problems yield predictions that are qualitatively similar to those generated by more general information acquisition technologies.
**C**: (2023), which solve the same decision problem assuming general posterior separable attention costs.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
We use the Add Health data to construct a cohort of clusters, each with a small number of adolescents. Then we apply our average direct effect (ADE) estimators under various interference specifications to this cohort. <|MaskedSetence|> <|MaskedSetence|> Therefore each cluster has a size of three. We also perform sub-sampling to ensure minimal overlaps among clusters and obtain 7905 clusters in total. <|MaskedSetence|> We measure academic performance by achieving a grade of B or better in mathematics, although the estimates are similar for other subjects.
We consider three interference specifications corresponding to the top three levels of interference structures in Section A.4. The first specification assumes that there is no interference, and thus an individual’s alcohol use and academic performance are not affected by his/her friends’ alcohol use. The second specification assumes that there is homogeneous interference, so that an adolescent’s alcohol use and academic performance are allowed to be affected interchangeably by their two best friends, regardless of the friends’ genders. The third specification assumes interference is heterogeneous, so that a best friend’s influence on an individual’s alcohol use and academic performance is potentially gender-dependent..
|
**A**: For each adolescent of our interest (which we refer to as the centroid of a cluster), we construct a cluster that consists of this adolescent, his/her best female friend, and his/her best male friend.
**B**: We construct the cohort using adolescents’ nominations of close friends.
**C**: We define regular alcohol consumption as drinking at least once or twice a week.
|
BAC
|
ABC
|
BAC
|
BAC
|
Selection 3
|
Data.
We empirically estimate the frequency and moments of price changes in scanner data from the supermarket chain Dominick’s. The data are provided by the James M. <|MaskedSetence|> As described in detail in Section A.4, we clean the data following Alvarez
et al. (2016), and in particular we focus on data from a single store. <|MaskedSetence|> The final data set contains weekly prices on 499 beer products (Universal Product Codes, henceforth UPCs), observed for an average of 76 weeks per UPC. The total sample size is n = 37,916. <|MaskedSetence|>
|
**A**: When computing standard errors, we treat the price changes as i.i.d. across UPCs and time.
**B**: Kilts Center, University of Chicago Booth School of Business.
**C**: Unlike those authors, we exclusively use data on beer products, which arguably increases the interpretability of the results and makes the sample size more relevant for our subsequent simulation study.
|
BCA
|
BCA
|
ABC
|
BCA
|
Selection 4
|
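For orientation on the item above, the reported statistics are the frequency and the moments of (log) price changes in a weekly UPC-level panel. A rough pandas sketch of those summary statistics on a synthetic panel; the column names, the adjustment probability, and the omission of the cited cleaning steps are all simplifying assumptions.

```python
import numpy as np
import pandas as pd

# Synthetic weekly price panel with columns upc, week, price.
rng = np.random.default_rng(2)
rows = []
for upc in range(50):
    price = 5.0
    for week in range(76):
        if rng.random() < 0.15:                     # occasional price adjustment
            price *= np.exp(rng.normal(0.0, 0.10))
        rows.append({"upc": upc, "week": week, "price": price})
panel = pd.DataFrame(rows)

# Log price changes within each UPC.
panel["dlp"] = panel.groupby("upc")["price"].transform(lambda p: np.log(p).diff())
changes = panel["dlp"].dropna()
nonzero = changes[changes != 0.0]

freq = (changes != 0.0).mean()                      # weekly frequency of price changes
print(f"frequency of price changes: {freq:.3f}")
print(f"mean |change|: {nonzero.abs().mean():.3f}, "
      f"std: {nonzero.std():.3f}, kurtosis: {nonzero.kurt():.2f}")
```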
In practice, the most common approach to address interference of this type is clustering units at a higher-level at which there are no spillover effects (Baird et al. <|MaskedSetence|> <|MaskedSetence|> (2022) and Filmer et al. (2023), and incentives for suppliers or consumers in online two-sided markets
(Brennan et al. <|MaskedSetence|> 2024).
|
**A**: 2018, Hudgens & Halloran 2008).
Examples include the evaluation of general equilibrium impacts of cash transfers in Banerjee et al.
**B**: 2022, Holtz et al.
**C**: (2021), Egger et al.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
Relative to these literatures, we make three contributions. <|MaskedSetence|> <|MaskedSetence|> We study a world in which regulators will have access only to parts of this information, for example a simplified representation of the credit scoring model. Given the complexity of machine learning and artificial intelligence tools, and potential limitations on the technical or legal reach of regulators, it is important to study optimal algorithmic regulation under informational constraints. <|MaskedSetence|> As in our Theorem 1, this empirical application shows that both a regulator and a lender can achieve privately preferred outcomes when the lender is permitted to use complex algorithmic credit scoring, rather than being constrained to simple and explainable models.
.
|
**A**: Second, existing analyses of algorithmic audits and other algorithmic regulation often assume that disclosure of all underlying algorithmic inputs (data, training procedure, and decision rule) is possible.
**B**: First, we offer a framework that nests many types of potential incentive misalignment between a developer of an algorithm and social planner, or between any algorithmic agent and a principal: these include a broad set of distributional objectives and fairness concerns, as well as, for example, diverging risk preferences.
**C**: Third, we provide empirical validation for our theoretical results in a real-world, economically important setting, where we examine the regulation of consumer credit scoring.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 2
|
<|MaskedSetence|> Next, we introduced separable game transformations for multiplayer games, and define the properties (i) universally preserving Nash equilibrium sets and (ii) universally preserving best responses. <|MaskedSetence|> We showed that separable game transformations which universally preserve Nash equilibrium sets also universally preserve best responses. In the subsequent results, we derived further that if a separable game transformation universally preserves best responses then it is a positive affine transformation.
When faced with a strategic interaction
it can be highly beneficial to consider equivalent variations of it that are easier to analyze. <|MaskedSetence|>
|
**A**: In this paper, we shed light on why PATs have become the go-to transformation method for that purpose, reinforcing their standing as the standard off-the-shelf approach.
**B**: In this paper, we first gave hardness results about deciding whether a strategy constitutes a best response or whether two games have the same best-response sets.
**C**: It is well-known that PATs universally preserve Nash equilibrium sets.
|
BCA
|
BCA
|
BCA
|
CBA
|
Selection 1
|
<|MaskedSetence|> We refer to these Nature strategies as fully-myopic. <|MaskedSetence|> In principle, Nature could promise more information in the future in order to deter the buyer from buying in the present. However, we show that this does not occur in equilibrium. The reduction to fully-myopic information arrival allows us to characterize equilibrium, recovering traditional intuition about the form of equilibrum price-paths. In particular, we show that it implies the seller does not randomize in equilibrium except possibly in the initial period; from this observation, payoff uniqueness immediately follows.
Here is the broad intuition for why fully-myopic information arrival is sequentially worst case. In the final period, Nature's information choice must have the property that any buyer who is not purchasing must be indifferent between purchasing or not. Otherwise, another information structure can be found that lowers the probability of purchase and thus seller profit. <|MaskedSetence|> Now, anticipating this, the buyer in the second-to-last period makes purchase decisions as if there would be no future information. This buyer behavior in turn reduces Nature's problem in the penultimate period to a static problem in which it seeks to minimize the probability that the buyer buys in that period. Thus Nature's optimal strategy in the second-to-last period is also fully myopic, and so on.
.
|
**A**: Because of this indifference property, the buyer’s expected payoff in the final period is the same as if no information were provided.
**B**: We emphasize that our model assumes Nature seeks to minimize the seller’s total discounted profit.
**C**: Formally, our first main result shows that the worst-case choices of Nature minimize the probability of sale within each period given the equilibrium price path.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
In other words: iterate. In Section 2.3 of Gutin et al. (2023), we argue that the IDUA procedure is the matching market analog of the iterated deletion of dominated strategies for strategic games. <|MaskedSetence|> A player’s dominated strategy cannot be part of any solution because it is weak in that the player has an alternative strategy that is guaranteed to pay better no matter what. <|MaskedSetence|> While a dominated strategy should never be played, its close-but-distant relative, a dominant strategy, must be chosen by a rational player since it is always the best response. That is, dominant strategies are strong relative to all strategy profiles. <|MaskedSetence|> Clearly, in both non-cooperative and cooperative environments, the notion of “always strong” is a more demanding requirement than “always weak”.
Once the IDUA procedure stops – and it must – what remains is the normal form: a (sub)matching market that contains all the essential information of the original market in that the set of stable matchings remains unchanged (see Lemma 1).
Not only are smaller mathematical objects typically easier to analyse but, as we discuss in Section 3, knowledge of the normal form strengthens the rural hospitals theorem free of charge since, in addition to the well-known insights listed in Footnote 6, the normal form also reveals what worker-firm pairs are not part of any stable matching amongst those workers and firms that always feature.
|
**A**: In a matching market a blocking pair is always strong only if both members of the pair rank each other first.
**B**: The parallels seem natural to us.
**C**: That is, a dominated strategy is always weak and therefore every strategy profile containing it can be removed from consideration.
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 2
|
Intuition of the proofs
The proofs of Theorems 1 and 2 are inspired by an innovative technique introduced by Lee (2016). <|MaskedSetence|> <|MaskedSetence|> We use a similar technique to analyze desirable rankings. For another application of Lee’s technique to interviews in the NRMP medical residency market, see Echenique et al. <|MaskedSetence|> We first discuss some basic concepts from random graph theory that we will need.
|
**A**: (2001) on the size of bicliques in random bipartite graphs to analyze incentives in stable matching mechanisms.
**B**: (2022).
**C**: He uses a result from Dawande et al.
|
BAC
|
CAB
|
CAB
|
CAB
|
Selection 2
|
<|MaskedSetence|> (2017) and Canay
et al. (2021) employ asymptotic frameworks in which the number of clusters is fixed in the limit. Similarly, to model difference-in-differences studies involving few treated units, Conley and
Taber (2011) keep the number of treated units fixed even as the number of untreated units tends to infinity. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: To capture this, Canay and
Kamat (2017) analyze a permutation test for continuity in the distribution of baseline covariates under an asymptotic regime with a fixed number of observations on either side of the discontinuity.
**B**: Müller (2010), Canay
et al.
**C**: In the same vein, inference and specification testing for regression discontinuity designs typically involves few observations around the discontinuity.
|
BCA
|
BCA
|
BCA
|
BAC
|
Selection 1
|
<|MaskedSetence|> In the present paper, we focus on one particular aspect, namely the rate at which agents verify messages they receive. <|MaskedSetence|> These include direct ones, such as raising information literacy rates or publishing guides on how to spot fake news, as done by, e.g., The New York Times or Le Monde, as well as indirect ones, by investing in education in general. Our main question of interest is the verification rate that a benevolent planner, whose goal it is to maximize the proportion of correctly informed agents in society, would set. <|MaskedSetence|> Meaning, some rumor that could be eradicated is allowed to circulate as it “creates some truth”.
.
|
**A**: We uncover the conditions under which this rate also minimizes the diffusion of a rumor opposing the truth versus when it does not.
**B**:
The diffusion of information on social networks is a complex matter and various policies have been suggested to curb the spread of rumors.
**C**: Policy makers or online social platforms can influence agents’ incentives to verify through various channels.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 3
|
Central to our study is the observation that a modification to an agent’s decision problem alters her value for information through two channels. The first is the agent’s sensitivity to information–if her value for information increases (in the manner defined in this paper) it must be that she is more reactive to information. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This is particularly stark when we modify the agent’s decision problem by adding a single action (§4.1): the agent’s value for information increases if and only if she becomes more sensitive to information. We uncover a simple geometric condition necessary and sufficient for this to transpire and show that an iterative version of this condition also guarantees an increase in the value of information when multiple actions are added. Moreover, although this condition is not necessary to make information more valuable when multiple actions are added, any failure of necessity is not robust–perturbing the utilities from the new actions slightly will make it so that the agent does not become more sensitive to information.
|
**A**: The second is the value to the agent of distinguishing between actions.
**B**: All in all, a transformation makes information more valuable if and only if the agent becomes more sensitive to information and the value of distinguishing between actions increases.
When the transformation to the decision problem is either the addition or subtraction of actions, the first channel is all-important.
**C**: This must also increase if the agent’s value for information is to increase.
|
ACB
|
ACB
|
ACB
|
CAB
|
Selection 2
|
<|MaskedSetence|> Hence, it is crucial to analyze the decision rule under the misspecification of the parameter space. We investigate the performance of the shrinkage rule and show that our results are robust to the misspecification of the parameter space. <|MaskedSetence|> <|MaskedSetence|> Similar to Stoye (2012), Tetenov (2012), Ishihara and.
|
**A**: Following Manski (2004, 2007), Hirano and
Porter (2009), Stoye (2009, 2012), and Tetenov (2012), we focus on the maximum regrets of statistical treatment rules.
**B**: To the best of our knowledge, this is the first study to consider the misspecification of the parameter space in the treatment choice problem.
Consequently, this study contributes to the growing literature on statistical treatment choice initiated by Manski (2000, 2004).
**C**: Kolesár (2021) consider the minimax estimation and inference problem for treatment effects and show that it is not possible to choose the parameter space automatically in a data-driven manner.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
<|MaskedSetence|> Such similarities are seen in human event-related potential (ERP) and functional magnetic resonance imaging (fMRI) studies (see the comprehensive review by [19]). <|MaskedSetence|> Studies using human event-related potential and functional magnetic resonance imaging (fMRI) have shown that the ACC is activated when subjects receive negative feedback after an inappropriate behavioral response. However, the crucial role played by LHb in monitoring negative outcome has evoked interest. <|MaskedSetence|> Therefore, the LHb and ACC may work together to monitor the negative outcome and alter subsequent choice behavior.
This can be understood by imagining how humans ceased living in trees and transitioned to bipedal walking. We have chosen survival strategies that develop the brain. Frogs, unlike monkeys and humans, preferred a different evolutionary strategy and aimed to improve by increasing the size of their brains. However, failure led to changing their strategy for survival by camouflaging, thus saving energy by reducing the size of their brains ([14]).
.
|
**A**: As the ACC sends projections directly to the LHb, both the structures can communicate through their reciprocal relationship.
**B**: Although they studied monkeys and the same experiments cannot be repeated on humans, earlier studies have shown strong similarities.
**C**: As [12] stated, the ACC is crucial for monitoring and adjustment.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> With two-way dependence, we no longer have independent blocks because each cluster can have observations that are dependent on observations from a different cluster when these observations share a cluster on a different dimension. Hence, a different proof strategy is required. The proof in this paper uses Stein’s method, which requires stronger moment restrictions, but provides a non-asymptotic bound on the approximation error — details are in Subsection 2.4. <|MaskedSetence|>
|
**A**: By using this strategy, a bounded fourth moment is required.
.
**B**:
Assumption 2(a) requires the fourth moment to be bounded, which is stronger than the moment condition in one-way clustering. See Equation (7) of Hansen and Lee (2019) for the condition in one-way clustering.
**C**: The proof in one-way clustering usually verifies a Lindeberg condition because blocks of observations are independent of each other.
|
BCA
|
BCA
|
CBA
|
BCA
|
Selection 1
|
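The item above concerns limit theory under two-way clustered dependence. For orientation only, here is a small numpy sketch of the familiar two-way cluster-robust OLS variance estimator (add the two one-way sandwich matrices and subtract the intersection term), which is the kind of estimator such asymptotics justify; it is not the Stein's-method argument discussed in the passage, and all data and cluster assignments are simulated.

```python
import numpy as np

def cluster_meat(X, u, groups):
    """Sum over clusters of (X_g' u_g)(X_g' u_g)'."""
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        s = X[groups == g].T @ u[groups == g]
        meat += np.outer(s, s)
    return meat

def twoway_cluster_vcov(X, u, g1, g2):
    """Two-way cluster-robust vcov: V(g1) + V(g2) - V(g1 intersect g2)."""
    bread = np.linalg.inv(X.T @ X)
    inter = np.array([f"{a}|{b}" for a, b in zip(g1, g2)])
    meat = cluster_meat(X, u, g1) + cluster_meat(X, u, g2) - cluster_meat(X, u, inter)
    return bread @ meat @ bread

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
g1 = rng.integers(0, 20, size=n)                 # clusters on the first dimension
g2 = rng.integers(0, 15, size=n)                 # clusters on the second dimension
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta
se = np.sqrt(np.diag(twoway_cluster_vcov(X, u, g1, g2)))
print(beta, se)
```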
Let us stack all the above series in a VAR model as in equation (1). We use the methods described in Section 2 in order to obtain the causal network displayed in Figure 3. <|MaskedSetence|> <|MaskedSetence|> Scripts and data can be freely downloaded from the dedicated GitHub repository. <|MaskedSetence|>
|
**A**: This graph plots all bivariate, conditional Granger causal relationships among the variables described above.
**B**: The arrows represent a (directional) Granger causal link that was found at a significance level of 10%. All the analyses reported in the following sections have been carried out using R (R Core Team, 2020).
**C**: See the “Data Availability Statement”.
.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 2
|
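The item above builds a directed network from Granger-causality tests within a VAR at the 10% level. A minimal statsmodels sketch on simulated series; it uses the plain bivariate grangercausalitytests helper rather than the conditional tests of the passage, and the lag order and data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated system in which x1 leads x2 by two periods.
rng = np.random.default_rng(4)
T = 300
x1 = rng.normal(size=T)
x2 = np.zeros(T)
for t in range(2, T):
    x2[t] = 0.6 * x1[t - 2] + 0.3 * x2[t - 1] + rng.normal()
data = pd.DataFrame({"x1": x1, "x2": x2}).iloc[5:]

edges, maxlag = [], 2
for cause in data.columns:
    for effect in data.columns:
        if cause == effect:
            continue
        # Column order matters: the first column is the variable being explained.
        res = grangercausalitytests(data[[effect, cause]], maxlag=maxlag, verbose=False)
        pval = res[maxlag][0]["ssr_ftest"][1]     # F-test p-value at the chosen lag
        if pval < 0.10:                           # 10% significance level, as above
            edges.append((cause, effect, round(pval, 4)))
print(edges)                                      # directed Granger-causal links
```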
Rubin (2003), Lee (2009)) and covariate-assisted (Heiler (2022), Olma (2021), Semenova (2020)) bounds only focus on the continuous outcome distributions. To make progress, we invoke the concept of conditional value-at-risk (Rockafellar and
Uryasev (2000)) from the finance literature. This concept has been frequently used in related yet different causal inference contexts, including bounding the conditional-value-at-risk of CATE function (Kallus (2022a)), policy learning from a biased sample (Lei
et al. <|MaskedSetence|> Invoking a dual representation for the conditional value-at-risk, we represent trimming (Lee) bounds as a special case of an aggregated intersection bound. <|MaskedSetence|> Unlike earlier approaches, the envelope score (1.2) does not involve any quantile calculations. Second, this paper introduces a covariate-assisted version of the Mourifié, Henry, and Meango (2020) bounds in the Roy model with an exogenous instrument. <|MaskedSetence|>
|
**A**: Finally, the paper targets the optimal worst-case distributional welfare based on the treatment effect quantile.
.
**B**: In a special case when the outcome distribution is continuous, the envelope score (1.2) coincides with the debiased moment function derived in Semenova (2020).
**C**: (2023)) and offline reinforcement learning (Bruns-Smith and
Zhou (2023)).
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 1
|
More formally, the distribution of individual-specific IQR can also be calculated, thanks to equation (21) in Theorem 2. For the majority of the population (contained between the 10th and the 90th percentiles) the estimated values range between 0.8 and 1.6 times the average perceived wage in the public sector. These results confirm the high prevalence of uncertainty in the population. <|MaskedSetence|> Only perceived amenities vary. The wage is set at the individual-specific average perceived wage in the private sector. The survey missed a question about the number of weekly working hours expected in the public or the private sector. The number of working hours in the public sector is set to 41.4 and the working hours for the private sector are set to 45.8 according to Christiaensen and Premand (2017). The average working time for salaried workers as calculated from the publicly available Ivorian labour force survey (ERI-ESI 2017) is 48 hours. Comparable numbers for other countries of francophone West Africa range between 45 and 51 hours. The distributions are somewhat flatter, reflecting the existence of dispersed beliefs. <|MaskedSetence|> Close to 15 percent of the population perceives amenities larger than the average wage in the public sector. Mangal (2024) uses a sample of 147 candidates preparing for competitive exams for government jobs to infer a lower bound on the total value of a government job, including amenities. <|MaskedSetence|> The above estimates of median returns are smaller in magnitude and provide a more nuanced picture.
.
|
**A**: He finds that the amenity value of a government job in Pune, India, comprises at least two-thirds of total compensation.
**B**: It implies that the job-seekers have a high value of waiting and collecting more information on the sectors instead of committing ex ante to one sector (on the option-value arising from collecting information, see, for example, Gong et al., 2020; Méango and Poinas, 2023).
Figure F.2 in Appendix F shows the distributions for perceived public-sector amenities: wages are set to be equal in both sectors so that there is no pecuniary gain from being in the private sector.
**C**: At the median returns, about 6 out of 10 students perceive positive amenities from being in the public sector.
|
ACB
|
BCA
|
BCA
|
BCA
|
Selection 3
|
<|MaskedSetence|> Our findings suggest that a principal possessing an advantage over its competitor can derive protection from competition, with the protective effect intensifying as the level of competition increases. Notably, this protection effect yields a tax rate p that is significantly higher than zero in the region of pure competition, thereby enhancing the profit of the advantaged principal without concern for competitive pressures from its rival. <|MaskedSetence|> Our results indicate that the primary driver of the disparity lies in the presence of multiple principals, whose interests exhibit varying degrees of alignment. <|MaskedSetence|> Furthermore, our findings shed light on the mechanisms underlying the diminished overall welfare of a party afflicted by intra-group conflicts of interest, which arise from multi-sided information asymmetry.
|
**A**:
In this study, we employ a symmetric duopoly framework featuring a principal-agent relationship, and subsequently conduct a comprehensive robustness analysis to account for heterogeneity among principals.
**B**: Furthermore, in the region of pure collusion, the two principals divide the revenue from both contracts equally, which creates an incentive for both principals to encourage the agent to exert effort on the project of the advantaged principal.
We devised a series of experiments and simulations to disentangle the competing explanations for the observed phenomenon.
**C**: This force operates distinctly in standard contract and dual-contract problems, respectively.
|
ABC
|
ABC
|
CAB
|
ABC
|
Selection 2
|
<|MaskedSetence|> Kline and Santos (2013) perform a sensitivity analysis in a different context: departures from a missing (data) at random assumption. <|MaskedSetence|> Our quantile breakdown frontier builds on the breakdown frontier introduced by Masten and Poirier (2020). However, instead of relaxing two parameters, we relax just one, and plot it against different quantiles. <|MaskedSetence|>
|
**A**: In a manner similar to us, this departure is measured as the Kolmogorov-Smirnov distance between the distribution of observed outcomes and the (unobserved) distribution of missing outcomes.
**B**: Another recent application of the breakdown analysis is Noack (2021) in the context of LATE.
.
**C**:
Our sensitivity analysis is based on the breakdown analysis of Kline and Santos (2013) and Masten and Poirier (2020).
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 1
|
Figure S.9: Direct CO2 emissions from electricity generation. <|MaskedSetence|> <|MaskedSetence|> In these scenarios, this is a direct consequence of additional electricity generation from natural gas, and to a smaller extent from lignite and hard coal in neighboring countries. Among the two hydrogen cases, on-site electrolysis at filling stations leads to lower emission impacts, as its temporal flexibility restrictions in combination with the assumed transmission capacities limit the possibility of increasing imports or generation from natural gas plants; instead, additional renewable energy, combined with long-duration electricity storage, is used (compare section 2.2). <|MaskedSetence|> The latter is driven by an additional expansion of solar PV facilitated by V2G. For neighboring countries, relative emission effects are smaller, as by assumption, they have lower renewable energy shares and, in turn, higher emissions, as well as no electrified truck fleets.
.
|
**A**: The chart displays emissions from Germany, its Neighbors and the two together.
**B**: Emission effects are smaller for BEV and ERS-BEV, and even negative for flexibly charged BEV, especially if combined with V2G.
**C**: Every group contains two subplots; the subplot on the left shows the overall emissions of the reference without electrified HDV, and the right-hand side subplot shows differences between HDV scenarios and the reference.
Among all scenarios, CO2 emissions increase the most if the HDV fleet uses e-fuels or hydrogen (Figure S.9).
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
Koijen (2020, 2021); Camanho
et al. <|MaskedSetence|> <|MaskedSetence|> Typically, latent factors are estimated using principal components on the demeaned outcome variable before being included as control variables in the GIV regression. Such an approach is attractive because it enables practitioners to apply GIV to applications where the correlations between unit shocks are driven by a small number of latent factors. <|MaskedSetence|>
|
**A**: (2022); Baumeister and
Hamilton (2023); Adrian
et al.
**B**: These estimated factors, however, are subject to measurement error, so their inclusion as control variables in subsequent GIV regressions gives rise to attenuation bias.
.
**C**: (2022) among others).
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 1
|
<|MaskedSetence|> This suggests that when the sample size (and the number of goods K, since computation time increases linearly in K for all these methods) is large, our method will be much more efficient in computation while maintaining reasonably large power.
We now turn to the standardized welfare loss bounds in Table 2. <|MaskedSetence|> i) All the upper bounds shrink towards the true value of the loss as the sample size increases. <|MaskedSetence|> iii) The standardized welfare loss bounds under ξ are larger than those under the other three statistics, but the differences are small and shrink towards zero as the sample size increases.
.
|
**A**: We have the following three findings.
**B**: ii) Information from data does improve the bounds compared to bounds without using data, as the bounds under constraint i) (or ii)) are smaller than those under constraint iii) (or iv)).
**C**: In terms of computation time, on the other hand, our method is about 900 times faster than the other three.
|
BAC
|
CAB
|
CAB
|
CAB
|
Selection 3
|
Text crops are thin vertically or horizontally oriented rectangles, with highly diverse aspect ratios, whereas vision encoders are almost always trained on square images. <|MaskedSetence|> To preserve the aspect ratio, we pad the rectangular crop with the median value of the border pixel such that the text region is always centered.
For language-image pretraining, the standard CLIP loss was used to align the image and text encoders (Radford et al., 2021). Supervised Contrastive loss (Khosla et al., 2020) was used for all supervised training. We used the AdamW optimizer for all model training along with a Cosine Annealing with Warm Restarts learning rate (LR) scheduler where the maximum LR was specified for each run and the minimum LR was set to 0. <|MaskedSetence|> An epoch involved sampling m views (image-text pairs) of each label in the dataset and going through each of them once. <|MaskedSetence|>
|
**A**: It took 24 hours to perform language-image pretraining, 10 hours to perform supervised training using synthetic data, and 40 minutes to train on the labelled data - a total training time of 34.6 hours to train CLIPPINGS on a single A6000 GPU card.
Hyperparameters and additional details about model training are listed in Table S-1.
**B**: 10 steps were chosen for the first restart with a restart factor of 2 that doubles the time to restart after every restart.
**C**: Center cropping would discard important information and resizing often badly morphs the image, given that the underlying aspect ratios are far from a square.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 1
|
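The training setup quoted above (AdamW plus cosine annealing with warm restarts, first restart after 10 steps and a restart factor of 2) maps directly onto standard PyTorch components. A minimal sketch under those assumptions; the model, maximum learning rate, batch, and loss are placeholders rather than the CLIPPINGS implementation.

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = nn.Linear(512, 512)          # placeholder for the paired image/text encoders
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)   # max LR is run-specific
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2, eta_min=0.0)

for epoch in range(30):
    # Real pipeline: sample m views (image-text pairs) per label and minimize the
    # CLIP / supervised-contrastive loss; here a dummy loss keeps the sketch runnable.
    dummy_loss = model(torch.randn(8, 512)).pow(2).mean()
    optimizer.zero_grad()
    dummy_loss.backward()
    optimizer.step()
    scheduler.step()                 # one warm-restart cosine step per epoch
```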
For instance, Liao
et al. (2023) adopted high-dimensional linear models to examine the double descent phenomenon in economic forecasts. <|MaskedSetence|> equity premium, U.S. unemployment rate, and countries’ GDP growth rate. <|MaskedSetence|> As a result, it is improbable that (1) holds in these applications. As another example, Spiess
et al. (2023) examined the performance of high-dimensional synthetic control estimators with many control units. The outcome variable in their application is the state-level smoking rates in the Abadie
et al. (2010) dataset. <|MaskedSetence|> states, it is unlikely that the regression errors underlying the synthetic control estimators adhere to (1). In short, it is desirable to go beyond the simple but unrealistic regression error assumption given in (1).
.
|
**A**: As in their applications, economic forecasts are associated with time series or panel data.
**B**: In their applications, the outcome variables include S&P firms’ earnings, U.S.
**C**: Considering the geographical aspects of the U.S.
|
BAC
|
BAC
|
ABC
|
BAC
|
Selection 2
|
A theory for semi-parametric estimation of (1) is thus of interest. <|MaskedSetence|> In this work, we combine the uniform sieve estimation framework in Chen and Christensen (2015) with the structural nonlinear time series model of Gonçalves et al. (2021), integrating them within a physical dependence setup (Wu, 2005).
Under appropriate regularity assumptions, we show that a two-step semiparametric series estimation procedure is able to consistently recover the structural model in a uniform sense. This allows us to further prove that nonlinear impulse response function estimates are asymptotically consistent and, thanks to an iterative algorithm, straightforward to compute in practice.
The core contribution of this work is the application and analysis of semiparametric estimation with regards to impulse response analysis. <|MaskedSetence|> They have been and are commonly used in time-homogeneous models. <|MaskedSetence|>
|
**A**: Parametric nonlinear specifications are common prescriptions, for example, in time-varying models (Auerbach and Gorodnichenko, 2012; Caggiano et al., 2015) and state-dependent models (Ramey and Zubairy, 2018).
**B**: Moreover, since we ultimately wish to compute impulse responses for the model, such theory should also provide valid asymptotic guarantees for estimated IRFs.
**C**: Kilian and Vega (2011) provide a structural analysis of potentially nonlinear effects of GDP on oil price shocks.
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 1
|
<|MaskedSetence|> This restriction is necessary for $y_{it}$ to have a fixed mean irrespective of whether $\phi_{i}=1$ or $\left|\phi_{i}\right|<1$. If $\alpha_{i}$ is unrestricted, a linear trend is introduced in $y_{it}$ when $\phi_{i}=1$. <|MaskedSetence|> We impose the restriction since we will be considering a mixture of processes with and without unit roots. <|MaskedSetence|>
|
**A**: The restriction on $\alpha_{i}$ is not binding when $\left|\phi_{i}\right|<1$.
**B**: where the fixed effects, $\alpha_{i}$, are restricted, $\alpha_{i}=\mu_{i}\left(1-\phi_{i}\right)$.
**C**: With $\alpha_{i}=\mu_{i}(1-\phi_{i})$, (2.1).
|
BAC
|
BAC
|
BCA
|
BAC
|
Selection 4
|
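The restriction discussed in the item above, alpha_i = mu_i (1 - phi_i), is what keeps the level of y_it anchored at mu_i whether phi_i = 1 or |phi_i| < 1. A small Monte Carlo sketch of that point; the parameter values and noise scale are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
T, mu = 200, 2.0

def mean_last_value(phi, alpha, reps=2000):
    """Monte Carlo estimate of E[y_T] for y_t = alpha + phi * y_{t-1} + 0.1 * eps_t, y_0 = mu."""
    vals = []
    for _ in range(reps):
        y = mu
        for _ in range(1, T):
            y = alpha + phi * y + 0.1 * rng.normal()
        vals.append(y)
    return float(np.mean(vals))

# Restricted intercept alpha = mu * (1 - phi): the mean stays at mu in both regimes.
print(mean_last_value(phi=0.9, alpha=mu * (1 - 0.9)))   # close to mu
print(mean_last_value(phi=1.0, alpha=mu * (1 - 1.0)))   # close to mu (driftless random walk)

# Unrestricted intercept combined with phi = 1: a linear trend of slope alpha appears.
print(mean_last_value(phi=1.0, alpha=0.05))             # roughly mu + 0.05 * (T - 1)
```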
The IPW approach integrates individual claims information in a simple, yet statistically justified manner. It maintains the practicality and interpretability characteristic of macro-level models, rendering it a more attractive choice for both practitioners and regulators. <|MaskedSetence|> <|MaskedSetence|> Section 3 extends the methodology to consider other types of reserves, such as the incurred but not reported (IBNR) and the reported but not settled (RBNS) reserves, as well as their cumulative payments counterparts. <|MaskedSetence|> Section 5 discusses how to estimate the inclusion probabilities. Section 6 provides a numerical study on a real insurance dataset. Lastly, Section 7 provides the conclusion and future research directions.
.
|
**A**: Ultimately, it paves the way for practitioners and regulators to consider tailor-made models based on micro-level techniques.
This paper is structured as follows: Section 2 introduces the reserving problem as a sampling problem, and shows the derivation of the IPW estimator for the reserve of all outstanding claims.
**B**: This approach may serve as an initial step to encourage practitioners, who typically rely on macro-level models, to explore the potential benefits and insights obtained from incorporating individual information in the reserving process.
**C**: Section 4 shows how the Chain-Ladder and some extensions can be viewed as IPW estimators, and discusses its implications.
|
BAC
|
BAC
|
ACB
|
BAC
|
Selection 4
|
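The item above frames reserving as a sampling problem handled with inverse-probability weighting. A toy numpy sketch of the Horvitz-Thompson-style total underlying such an IPW reserve estimate; claim amounts and inclusion probabilities are simulated, and the estimator in the paper has considerably more structure (IBNR/RBNS splits, estimated probabilities, and so on).

```python
import numpy as np

rng = np.random.default_rng(6)
n_claims = 10_000
amounts = rng.lognormal(mean=7.0, sigma=1.0, size=n_claims)   # ultimate claim amounts

# Probability that a claim is already observed (reported/settled) at the valuation date.
p_observed = rng.uniform(0.3, 0.9, size=n_claims)
observed = rng.random(n_claims) < p_observed

# IPW (Horvitz-Thompson) estimate of the total of all claims from the observed ones only.
ipw_total = np.sum(amounts[observed] / p_observed[observed])
reserve_estimate = ipw_total - amounts[observed].sum()        # estimated outstanding part
print(f"true total: {amounts.sum():,.0f}  IPW total: {ipw_total:,.0f}  "
      f"reserve estimate: {reserve_estimate:,.0f}")
```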
The remainder of the paper is organized as follows. In Section 2 we describe our setup and notation. In particular, there we describe the precise sense in which we require that units in each pair are “close” in terms of their baseline covariates. Our main results concerning the Wald estimator are contained in Section 3. In Section 4, we develop results pertaining to our covariate-adjusted estimator that exploits additional observed, baseline covariates not used in pairing units. In Section 5, we examine the practical relevance of our theoretical results via a small simulation study. <|MaskedSetence|> Finally, we conclude in Section 7 with some recommendations for empirical practice guided by both our theoretical results and our simulation study. <|MaskedSetence|> When there are additional, observed, baseline covariates that are not used when forming pairs, we recommend the use of our covariate-adjusted Wald estimator with our consistent estimator of its limiting variance. <|MaskedSetence|>
|
**A**: As explained further in that section, we do not recommend the use of the Wald estimator with the conventional heteroskedasticity-robust estimator of its limiting variance because it is conservative in the sense described above; we instead encourage the use of the Wald estimator with our consistent estimator of its limiting variance because it is asymptotically exact, and, as a result, can be considerably more powerful.
**B**: In Section 6, we provide a brief empirical illustration of our proposed tests using data from an experiment in Groh and McKenzie (2016).
**C**: Proofs of all results are provided in the Appendix.
.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 2
|
On average, local LSTM models consistently perform worse than all three econometrics baselines. However, upon closer inspection of the MCS, we found that local LSTM is more frequently included in the SSM compared to GARCH and GJR models. This is in line with our initial hypothesis, indicating that the effectiveness of local DL models is heavily influenced by the characteristics of the evaluation dataset. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This result may explain why numerous studies suggest the superiority of DL models over econometrics models, while the econometrics community remains skeptical. In conclusion, our research does not find conclusive evidence that local DL models consistently outperform their econometrics counterparts.
.
|
**A**: In contrast, econometrics models perform more consistently across different stocks.
**B**: Hence, even though local LSTM models tend to have lower average predictive accuracy, they have a higher chance of being included in the SSM when they do perform well.
**C**: While local DL models may excel for certain stocks, they may perform poorly with others.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 3
|
<|MaskedSetence|> For active control estimators, signal monotonicity and treatment-group neutrality correspond to the canonical IV monotonicity condition (Imbens and Angrist, 1994; Angrist et al., 2000). In particular, agents’ posterior features are larger when provided a high signal as opposed to a low signal. For passive control estimators, signal monotonicity and control-group stability correspond to “weak” IV monotonicity, which allows the direction of IV monotonicity to vary by covariates (Słoczyński, 2020; Blandhol et al., 2022). In particular, agents’ posterior features are larger when provided a signal above their prior feature, and smaller when provided a signal below their prior feature. Finally, the contamination that we find in some passive control estimators is an instance of TSLS specifications failing to be “monotonicity correct” in their first-stage (Blandhol et al., 2022).
Haaland et al. (2023) survey applications of information provision experiments, and give guidance on experimental design, belief elicitation techniques, and other technical challenges. In contrast, we develop theory for the identification and estimation of causal effects. The closest to our paper in this regard is the independent and concurrent work of Balla-Elliott (2023), who likewise studies the interpretation of TSLS estimators in information provision experiments. <|MaskedSetence|> To identify this APE, Balla-Elliott (2023) restricts heterogeneity in the feature value margin, and appeals to the structure of (i) active control comparisons; and (ii) linear updating of expectations. In contrast, since we primarily seek to characterize existing specifications, we achieve identification under weaker conditions on agents’ actions, in both passive and active control experiments, and for more general learning environments. <|MaskedSetence|>
|
**A**: Balla-Elliott (2023) considers the partial effects of expectations, and targets an APE that places equal weights across agents.
**B**: Finally, while Balla-Elliott (2023) also interprets TSLS specifications from the literature, we discuss and characterize a more comprehensive set of specifications in a more general framework.
.
**C**: Our proposed conditions correspond to notions from the IV literature.
|
CAB
|
CAB
|
ABC
|
CAB
|
Selection 2
|
There is an extensive body of literature on conditional density estimation. Kernel-based methods have constituted the traditional approach; however, their poor finite-sample performance under many covariates has triggered the developments of bandwidth selection methods and alternative techniques. To name a few, Hall et al. <|MaskedSetence|> Efromovich (2007) studies adaptive optimal rates using orthogonal polynomials for one-dimensional continuous variables. Shen and Ghosal (2016) studies the frequentist properties of nonparametric Bayesian models, focusing on adaptive density regression in high-dimensional settings; they develop priors based on orthogonal polynomials and splines, achieving adaptive optimal contraction rates. In a similar setting, Norets and Pati (2017) and Norets and Pelenis (2022) use priors based on mixtures of regressions, addressing the issue of irrelevant covariates. In addition, kernel-based estimators also suffer from boundary bias problems and, in this regard, series and local polynomial techniques constitute effective solutions: see, e.g., Cattaneo et al. (2023a) and the references cited therein.
This paper contributes to this literature but differs from the aforementioned references in that it does not examine the effect on the convergence rate of the smoothness of f(y|x) with respect to x. This limitation commonly arises in the study of the asymptotic properties of forest-based estimators, as in Wager and Athey (2018) and Athey et al. (2019), although there are recent notable exceptions such as Mourtada et al. (2020) and Cattaneo et al. (2023b). Thus, this paper is closely related to recent studies that incorporate machine learning techniques into the development of conditional density estimators such as Dalmasso et al. <|MaskedSetence|> <|MaskedSetence|> (2019)’s forest design and the establishment of desired asymptotic properties: uniform consistency, asymptotic normality, and a valid standard error formula. Additionally, the use of a series method ensures that the uniform consistency result applies to the entire support, preventing the proposed estimator from experiencing boundary bias problems.
|
**A**: (2004) develop a cross-validation bandwidth selection method that is able to distinguish relevant from irrelevant covariates.
**B**: (2020) and Gao and Hastie (2022).
**C**: The distinguishing aspect of my paper, compared to these articles, is the use of Athey et al.
|
ABC
|
ABC
|
BAC
|
ABC
|
Selection 4
|
<|MaskedSetence|> For example, in the setting above, even if the researcher controls for observable individual characteristics such as gender, age, race, and parents’ education, the analysis is likely to omit factors that influence both students’ choice of friends and their choice of college major (e.g., family expectations, effort, motivation, psychological disorders, or unreported substance use). <|MaskedSetence|> <|MaskedSetence|> That is, in practice, when a researcher observes a link between two agents, they often have similar social characteristics. Therefore, network data can potentially be used to account for unobserved social influences.
.
|
**A**: In the literature, a common approach to addressing this problem is to collect network data under the assumption that the latent social influence is revealed by linking behavior in the network (Blume et al., 2011; Graham, 2015; Auerbach, 2022).
**B**:
In practice, these social influences are not observed by the researcher, making it hard to include them as a covariate in the model.
**C**: These unobserved factors affect both student’s choice of friends and the choice of a specific field.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> Athey and Wager (2021) and Zhou, Athey, and Wager (2023) analyze efficient estimation of targeted policies with constraints on the policy class. Hitsch, Misra, and Zhang (2023); Knaus, Lechner, and Strittmatter (2022); Yang et al. <|MaskedSetence|> An early article to do so is Ascarza (2018), which shows that targeting a customer retention program based on predicted churn performs considerably worse than targeting based on estimated treatment effects. Subsequent studies in marketing have similar findings (e.g. Devriendt, Berrevoets, and Verbeke, 2021)..
|
**A**: (2021) proposes metrics for assessing the benefits of targeting similar to those applied in this article.
**B**: Yadlowsky et al.
**C**: (2020), among others, carry out empirical studies analyzing targeted policies derived using machine learning methods.
Zhang and Misra (2022) proposes a two-step procedure involving treatment-effect estimation and discretization to decide which treatments to offer to whom.
Until recently, there was little empirical evidence about the relative performance of predictive and causal targeting.
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 1
|
Such distortion functions feature raising probabilities to a power, as in models of base-rate neglect (e.g., Benjamin et al., 2019) as well as power probability weighting functions used in rank-dependent utility or prospect theory (e.g., Diecidue et al. (2009)). <|MaskedSetence|> <|MaskedSetence|> Here, we consider a distinct approach to belief distortion considered in the literature: individuals distort their beliefs about experiment probabilities (which features in models such as Möbius et al. (2022) and Caplin and Leahy (2019)). In other words, they implicitly alter the chance of a signal, conditional on a given state of the world, as a way of altering the posterior beliefs conditional on a signal. We define Blackwell signal coherency as the idea that the updating with respect to a set of signals should commute with respect to distortion. <|MaskedSetence|>
|
**A**: Thus, our notion of coherency is consistent with familiar functional forms from several distinct approaches to distorted beliefs.
Section 3 initiates a study of learning given signal realizations, modeled as Blackwell experiments.
**B**: The results here are largely a reinterpretation of the results in Section 2, and we obtain a similar functional form restriction: the distorted signal probabilities (conditional on a state of the world) must be power-weighted.
**C**: However, they also allow for the probabilities of the states to be higher or lower depending on the identity of the state, as in models of motivated beliefs, like Mayraz (2019).
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 3
|
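For concreteness on the item above, one generic form a power-weighted probability distortion can take is to raise each probability to an exponent and renormalize; the exponent and the renormalization below are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def power_distort(p, gamma):
    """Raise probabilities to a power and renormalize to sum to one."""
    p = np.asarray(p, dtype=float)
    w = p ** gamma
    return w / w.sum()

probs = np.array([0.6, 0.3, 0.1])
print(power_distort(probs, gamma=0.5))   # gamma < 1 compresses toward uniform (base-rate neglect)
print(power_distort(probs, gamma=2.0))   # gamma > 1 exaggerates the leading probability
```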
<|MaskedSetence|> <|MaskedSetence|> Importantly, from the point of view of financial policy, a theory of mind AI should be able to understand and provide context for policy decisions, which today’s AI cannot do, and hence be invaluable in crisis resolution. AI that understands how its interlocutors think can explain its decisions to them and reason about how they will react to its actions and plan accordingly.
And finally, artificial general intelligence or AGI, which is still purely theoretical while often hypothesised. It would use existing knowledge to learn new things without needing human beings for training and being able to perform any intellectual task humans can. <|MaskedSetence|>
|
**A**: We then have a theory of mind, not yet realised, AI that aims to understand thoughts and emotions and consequently simulate human relationships.
**B**: They can reason and personalise interactions.
**C**: If it reaches that level, increased computer capacity will allow it to surpass humans.
.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 3
|
The remainder of this paper is organised as follows. Section 2 introduces our model. Section 3 discusses the experimental design. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Section 7 concludes. The Online Appendix contains the proofs of our main results, as well as additional details on the semiparametric estimator.
.
|
**A**: Section 6 presents the results of our empirical analysis.
**B**: Section 5 discusses estimation.
**C**: Section 4 presents our target estimands and the proposed semiparametric model.
|
CBA
|
CBA
|
CBA
|
ABC
|
Selection 2
|
<|MaskedSetence|> Why? In a low-TFP gain economy, equilibrium labor intensity is higher in the unenclosed sector, so each unit of enclosed land employs fewer workers than had operated on the same unenclosed plot. <|MaskedSetence|> In contrast, in the high-TFP gain economy, each new enclosure shifts the marginal product by enough to augment the amount of labor used on the plot. <|MaskedSetence|> This raises the wage that must be paid in the enclosed sector, but this in turn reduces the returns to further enclosure.
.
|
**A**: The displacement or release of labor to the rest of the economy lowers the market equilibrium wage, raising overall labor utilization on enclosed (and unenclosed) land, which raises the market equilibrium rental rate and the return to further private enclosures.
**B**: This draws in labor and also reduces crowding on unenclosed land.
**C**:
Propositions 3 and 4 differ sharply in their characterizations of enclosure equilibria.
|
CAB
|
CAB
|
BAC
|
CAB
|
Selection 2
|
<|MaskedSetence|> Firms near areas with high nominal income, i.e., large markets, are able to pay higher nominal wages. Similarly, firms close to regions with high price indices (as discussed below, a high price index implies weak competition, leading to higher nominal income in that region) can pay higher nominal wages.888Similar interpretation is given in Fujita et al. <|MaskedSetence|> <|MaskedSetence|> Furthermore, we can also see simple labor market consequence where nominal wages are affected by the relative number of firms and workers present in the region.
.
|
**A**: (1999, p.53).
**B**: These effects are discounted by transport costs.
**C**: (19)
Intuitively, this equation can be interpreted as follows.
|
ACB
|
CAB
|
CAB
|
CAB
|
Selection 2
|
<|MaskedSetence|> I extend Wald’s (1947) widely used sequential sampling model to a strategic situation. <|MaskedSetence|> <|MaskedSetence|> To focus on stopping decisions, the model features public and exogenous signals so that players can only choose when to stop acquiring information but not what information to acquire.
In my model, players’ strategic interaction is determined by a collective stopping rule that specifies how many or which players must stop in order for collective information acquisition to end. For simplicity, I focus the bulk of my analysis on the two player case, deferring more than two players to an extension in Section 4.3. With two players, I consider two stopping rules: unilateral stopping where information acquisition can be unilaterally terminated by either player, and unanimous stopping where the termination requires unanimity of two players. Under either rule, players face constrained optimal stopping problems where the constraint is determined by the other’s stopping strategy.
.
|
**A**: When information acquisition ends, players get terminal payoffs that depend on their common posterior belief about the state.111A leading situation is where players obtain ex post payoffs depending on the underlying state, and thus their expected payoffs depend on the posterior belief.
**B**: Players collectively determine when to stop acquiring costly public signals about a binary state of the world.
**C**: This paper presents a framework for analyzing stopping decisions in collective dynamic information acquisition.
|
CBA
|
CBA
|
ABC
|
CBA
|
Selection 4
|
In each of these scenarios, an agent-informed oracle is potentially very useful to the sender but also costly to invoke. It is therefore important to understand how to employ them effectively and efficiently, and how to quantify their benefit. <|MaskedSetence|> Complicating the situation, the space of potential queries can be enormous and the information provided by one query can complement revelations from previous queries. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: It is therefore crucial to understand the algorithmic problem of computing optimal (or near-optimal) adaptive query sequences for the sender.
We focus on a BP setting in which the state and action set are binary and the sender’s utility is state-independent.
(Both assumptions are common special cases, see, e.g., Parakhonyak and Sobolev (2022); Kosterina (2022); Hu and Weng (2021); Kolotilin et al.
**B**: A sender with a generative AI query budget must understand which potential query (or queries) will produce the greatest benefit; a seller debating whether to commission more rounds of market research should calculate whether the expected benefits outweigh the costs.
**C**: (2017).).
|
BAC
|
BAC
|
BCA
|
BAC
|
Selection 2
|
Using this dataset, we apply four distinct filters to “directly” estimate wash trading in NFT markets. Filter 1 targets the simplest form of wash trading by flagging transactions where the buyer and seller are using the same wallet address, effectively identifying cases where an individual is blatantly selling an NFT to themselves. Filter 2 steps up in complexity by detecting back-and-forth trades, activated when buyer and seller identities are inverted for the same NFT in sequential transactions, pointing to a scenario where a trader uses two accounts to repeatedly trade the same NFT. <|MaskedSetence|> Finally, Filter 4 identifies cases with common upstream buyer and seller wallet addresses, indicative of a single entity controlling both sides of a transaction, such as when the same wallet funds both the initial and subsequent purchases of an NFT. These filters yield a conservative, yet likely precise, estimate of wash trading. While one could theoretically devise more complex obfuscation strategies to evade these filters, higher complexity implies higher implementation costs, which would naturally restrict their use in practice. <|MaskedSetence|> Accordingly, we refer to our methodology as the “direct” estimation method in this study. <|MaskedSetence|>
|
**A**: Filter 3 goes further by flagging instances where the same buyer acquires the same NFT three or more times, targeting wash traders cycling an NFT through multiple accounts, with the Filter triggering when the NFT completes at least three such cycles.
**B**: This crucial aspect reinforces the credibility of our estimate.
**C**: The details are discussed in §4.
Indirect Estimation.
|
ABC
|
ABC
|
BAC
|
ABC
|
Selection 4
|
<|MaskedSetence|> Similar to Hollingworth et al. (2022), we not only include individual-level covariates like gender, age, ethnicity, and parental education, but also certain state-level control variables such as unemployment rate and median income, fixed at pre-treatment levels.161616State-level unemployment rates as well as the state median income were constructed from data of the Bureau of Labor Statistics (https://www.bls.gov/). Finally, we employ state-level marijuana prices from 2015 as an instrument for the consumption (state_price_m), and a “survey cooperation” measure (trust) for the reporting, following Greene et al. (2018) and Brown et al. (2018).171717Marijuana price data source:
https://oxfordtreatment.com/substance-abuse/marijuana/average-cost-of-marijuana/ (last accessed: 12-10-2023). <|MaskedSetence|> <|MaskedSetence|> On the other hand, “survey cooperation”, defined as the proportion of missing survey responses (based solely on questions common to all questionnaire forms and not directly related to drug use), is assumed to provide an indication of the respondents’ willingness to cooperate in the survey, unrelated to marijuana consumption itself, as the reporting decision occurs post-consumption only.
Table 4 presents descriptive statistics of the outcome variable and other controls. As expected, marijuana consumption among 8th graders appears relatively low, with only 6% of the sample reporting usage over the past 30 days. Further, unreported results indicate that these characteristics remain largely stable across time within each group..
|
**A**: Here, marijuana prices are presumed to exclusively influence consumption decisions.
**B**: The plausibility of the exclusion restriction relies on the assumption that consumption behavior is sensitive to prices, while reporting behavior is not, given that we also control for other state-level variables.
**C**: The outcome variable of interest (cons_30days), based on the survey responses to the question “On how many occasions have you used marijuana (weed, pot) or hashish (hash, hash oil) … in the last 30 days?”, is categorized into three levels: “0” (never used), “1” (used 1-2 times), and “2” (used more than 2 times).
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 4
|
<|MaskedSetence|> We
consider the same setting of economic exchange with no aggregate uncertainty, as in
our second result; but we strengthen the assumptions on preferences to focus on
utilities with multiple priors. <|MaskedSetence|> <|MaskedSetence|> The stronger is the level of Pareto improvement, the smaller will be the size of the prior beliefs set. This means that at least one of the two agents must, in some sense, be close to being ambiguity neutral.
.
|
**A**: Think, to fix ideas, of the max-min expected utility preferences of Gilboa and Schmeidler (1989).
**B**: Now we prove that, if the risk sharing agreement between the two agents can be improved in the Pareto sense, then at least one of the two must have a small set of prior beliefs.
**C**: Our third main result (Theorem 3) is not explicitly about welfare, but instead about the attitudes towards uncertainty by agents who engage in efficient risk sharing.
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 1
|
Income effects are the heart of the difficulty in establishing uniqueness, since compensated demand (the substitution effect) is downward sloping under general conditions. <|MaskedSetence|> And if, due to heterogeneity, individual demand inflection points occur across a large price range, market demand can take many shapes and intersect supply at many prices.
The discussion of this section illustrates the tight link between the questions of aggregation and uniqueness.111111See [141, ] and the corresponding 1999 special issue of the Journal of Mathematical Economics on demand aggregation. If a heterogeneous agent economy is equivalent to an economy with a representative agent with standard preferences, then equilibrium will be unique because there is no scope for Giffen behavior at the equilibrium. When you cannot construct a standard representative agent, uniqueness is still possible, but the problem becomes more difficult because you cannot appeal to properties of a single agent’s demand function. <|MaskedSetence|> The result appears in [140, ] but without citation, and it seems to have been known for a long time. <|MaskedSetence|>
|
**A**: Even with relatively strong assumptions on utility, individual excess demands can be non-monotonic since price changes affect wealth.
**B**: The [135, ] representative agent is the most famous example of aggregation, but the proof we provided above (adding across Euler equations with common utility weights) also works in two other cases, heterogeneous quadratic Bernoulli utility121212The authors of this survey learned the quadratic result in John Geanakoplos’ 2008 graduate micro theory course.
**C**: and heterogeneous constant absolute risk aversion (CARA) Bernoulli utility.
.
|
BAC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
In general, this paper contributes to the vast operational research literature that either post hoc analyses changes to scoring rules and laws in sports or proposes new changes (for recent surveys see Wright, 2014; Kendall and Lenten, 2017). <|MaskedSetence|> For instance, in the world’s most popular sport, association football, recent contributions have used simulations to explore whether incentives and outcomes could be altered significantly under different tie-breaking rules in round-robin tournaments (Csató, 2023; Csató, Molontay and Pintér, 2024), whether dynamic sequences in penalty shootouts could be fairer (Csató and Petróczy, 2022), and whether the allocation system for the additional slots of the expanded FIFA World Cup could be improved according to the stated goals of the organisers (Krumer and Moreno-Ternero, 2023).
Finally this paper builds on a growing sports economics and management literature studying various incentive issues in boxing and other combat sports (Akin, Issabayev and Rizvanoghlu, 2023; Amegashie and Kutsoati, 2005; Butler et al., 2023; Butler, 2023; Duggan and Levitt, 2002; Dietl, Lang and Werner, 2010; Tenorio, 2000). However, to the best of our knowledge, the incentives of boxing judges have not yet been studied, given the scoring rules they face, despite a well-developed literature on the influences and implications of biased decision making by the referees and judges in other sports (e.g., Dohmen and Sauermann, 2016; Bryson et al., 2021; Reade, Schreyer and Singleton, 2022), including other combat contests (Brunello and Yamamura, 2023)).
The remainder of our short paper proceeds as follows. <|MaskedSetence|> Section 3 describes our analysis and discussion of the model. <|MaskedSetence|>
|
**A**: Our work falls into the latter type of study, particularly where minimalist changes have been proposed that could still in theory substantially improve the fairness of sports outcomes.
**B**: In Section 2, we setup a styled model of potentially biased judging in a boxing contest.
**C**: The detailed proofs of the main propositions regarding the scoring rules are presented in the Online Appendix, as are variations on the main results from simulating the model..
|
ABC
|
ABC
|
BAC
|
ABC
|
Selection 4
|
Thus, the intersection of price discrimination and consumer privacy is a timely and intriguing topic. One might initially assume that these two concepts are at odds. Price discrimination, by its very nature, relies on detailed information about consumers to tailor prices effectively. In contrast, an emphasis on consumer privacy could limit the ability of sellers to implement price discrimination strategies. <|MaskedSetence|> Understanding how privacy constraints affect the ability of firms to engage in price discrimination helps in understanding existing market dynamics and also aids in shaping policies that protect consumer interests.
Our study of the intersection of price discrimination and privacy builds on prior work of Bergemann et al. [2015]. Let us first describe their model and then explain how things change if we consider consumers’ privacy concerns. Consider a monopolist who is selling a product to a continuum of consumers. If the producer knows the values of consumers, then they could engage in first-degree price discrimination to maximize their utility, while the utility of consumers would be reduced to zero. On the other hand, if the producer has no information besides a prior distribution about the value of consumers, then they can only enact uniform pricing. This leads to two pairs of producer utilities and consumer utilities. Bergemann et al. [2015] delineate the set of all possible pairs by using different segmentations. Let us clarify this approach with an example. Imagine a producer aiming to set a product’s price for its customers across the United States. In this context, the aggregated market is defined as the entire US market, meaning the distribution of consumers’ valuations nationwide. Each zip code within the US represents its own market, which is basically the distribution of the consumers’ values in that zip code. <|MaskedSetence|> <|MaskedSetence|> Bergemann et al. [2015] establish that the set of all possible pairs of consumer and producer utilities is a triangle as depicted in Figure 1. The point Q𝑄Qitalic_Q corresponds to the first-degree price discrimination, and the line TR𝑇𝑅TRitalic_T italic_R represents all possible consumer utilities when the producer utility is at its minimum, equating to the utility derived from the uniform pricing.
.
|
**A**: However, consumers can benefit from price discrimination, and thus, this intersection warrants a more nuanced examination.
**B**: Segmentation, in this case, refers to the division of the aggregated US market into these specific markets.
**C**: Generally speaking, a segmentation involves breaking down the aggregated market into smaller markets, under the condition that these smaller markets must add up to the aggregated market.
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 3
|
<|MaskedSetence|> For instance, in the problem of vaccine allocation, age and medical history are good indicators of need. Similarly, tax data proxies for one’s wealth, even if some income sources or assets remain unobserved. <|MaskedSetence|> Such uncertainty can still be substantial even in means-tested programs.777https://thehill.com/regulation/administration/268409-outrage-builds-over-wealthy-families-in-public-housing/
Still, what can the government do if it perfectly observes agents’ need for money α𝛼\alphaitalic_α? As it turns out, the set of equitable allocation rules that are implementable with one instrument when α𝛼\alphaitalic_α is observed is identical to the set of equitable allocation rules that are implementable with two instruments when α𝛼\alphaitalic_α is private. <|MaskedSetence|>
|
**A**: Intuitively, when the government observes α𝛼\alphaitalic_α, it can ‘control for’ the fact that some agents prefer the good because they are wealthy and screen purely based on need..
**B**: In such cases, agents’ private information can be thought of as residual uncertainty after accounting for observables.
**C**:
While I assumed that neither need nor wealth are observable, the government usually has some information about them.
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 4
|
Generalized principal-agent problem, originally proposed by (Myerson, 1982), is a general model that includes auction design, contract design, and Stackelberg games. Gan et al. (2024) further generalize it to include Bayesian persuasion.
We take the complete-information variant of (Gan et al., 2024)’s model.111 The models of Myerson (1982) and Gan et al. (2024) allow the agent to have private information. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Our conclusions do not extend to the private-information case.
There are two players in a generalized principal-agent problem: a principal and an agent.
**B**: Our work studies a complete-information model.
**C**: The principal has a convex, compact decision space 𝒳𝒳\mathcal{X}caligraphic_X and the agent has a finite action set A𝐴Aitalic_A..
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 2
|
<|MaskedSetence|> Identification and estimation in the MRDD setting, and how they differ from the single-dimensional RDD, have been investigated by imbens2009regression and papay2011extending, respectively; other results are also discussed in wong2013analyzing, and imbens2019optimized. So far, to my knowledge, there is no research explicitly dealing with extending the framework proposed by lee2008randomized and discussing manipulation tests in the MRDD context. <|MaskedSetence|> None of these approaches justifies the implemented procedures333snider2015barriers recognize that formal results are missing, asserting that Extending formal tests to check for the strategic manipulation […] with a two-dimensional predictor vector is not immediately clear., while my test is supported by a model and backed by statistical theory. <|MaskedSetence|>
|
**A**: Interestingly, though, manipulation tests are run by several applied papers employing MRDD (see the survey in Table 1): they appeal to disparate approaches, with different null hypotheses, assumptions, and test statistics.
**B**: Local asymptotic analysis and Monte Carlo simulations confirm my test’s advantages in terms of size control and power in realistic settings.
.
**C**:
I am not the first to study MRDD from a theoretical perspective.
|
CAB
|
CBA
|
CAB
|
CAB
|
Selection 1
|
<|MaskedSetence|> The process begins with voting, where human or AI players report preferences on social welfare objectives to a voting mechanism (1). <|MaskedSetence|> <|MaskedSetence|> This POMG unfolds over several timesteps H𝐻Hitalic_H (3). Following the POMG, game state information is extracted to initiate n𝑛nitalic_n last POMG state used as the first game state of the new round. This whole process is repeated again in the next round.
.
|
**A**:
Figure 1: The proposed framework.
**B**: This an objective for a Principal policy-maker, who designs a parameterized N𝑁Nitalic_N-player Partially Observable Markov Game (POMG) (2).
**C**: The players are the same as the voters.
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 2
|