Dataset schema (columns, types, and value ranges):

| Column | Type | Values |
| --- | --- | --- |
| text_with_holes | string | lengths 166–4.13k |
| text_candidates | string | lengths 105–1.58k |
| A | class | 6 values |
| B | class | 6 values |
| C | class | 6 values |
| D | class | 6 values |
| label | class | 4 values |
<|MaskedSetence|> Exogenous controls are the sales in 2018, the employees in 2018, the German state, and the legal status (Rechtsform). <|MaskedSetence|> This leads to a sample size of n = 1,131,230. We believe that it is reasonable to assume the existence of normally distributed latent terms, which determine the sales and employee growths. These terms refer to “management intelligence” and there is evidence for the fact that such intelligence-related terms are normally distributed (Breitung et al., 2024+). <|MaskedSetence|>
**A**: Potential endogenous variables are the sales growth from 2018 to 2019 and the employee growth from 2018 to 2019. **B**: On the other hand, growth values are typically non-normally distributed, so that our nonlinearity condition should be fulfilled. . **C**: Companies whose sales in 2018 are below and above the 5% and 95% quantiles of these sales are excluded.
CBA
ACB
ACB
ACB
Selection 3
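The sample-construction step in the record above (dropping firms whose 2018 sales lie below the 5% or above the 95% quantile before computing growth rates) is easy to sketch in pandas. The data frame and column names below are invented placeholders, not the authors' data; with the real firm register this trimming is what yields n = 1,131,230.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the firm-level data (column names are illustrative).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sales_2018": rng.lognormal(mean=12, sigma=2, size=10_000),
    "sales_2019": rng.lognormal(mean=12, sigma=2, size=10_000),
    "employees_2018": rng.integers(1, 500, size=10_000),
    "employees_2019": rng.integers(1, 500, size=10_000),
})

# Drop firms whose 2018 sales fall below the 5% or above the 95% quantile.
lo, hi = df["sales_2018"].quantile([0.05, 0.95])
trimmed = df[df["sales_2018"].between(lo, hi)].copy()

# Growth rates used as the potential endogenous variables.
trimmed["sales_growth"] = trimmed["sales_2019"] / trimmed["sales_2018"] - 1
trimmed["employee_growth"] = trimmed["employees_2019"] / trimmed["employees_2018"] - 1
print(len(trimmed))  # with the real data this step yields n = 1,131,230
```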
<|MaskedSetence|> Weighted DFs are obtained from the average of all pre-post combinations of periods in the data. 999 cluster empirical bootstrap replications (at the block level) are used. <|MaskedSetence|> QTT confidence bands are constructed from DF bands with joint 90% coverage. <|MaskedSetence|>
**A**: The standard normal is the working CDF. . **B**: The shaded areas correspond to 90% uniform confidence bands using R code from Chernozhukov et al. (2019). **C**: Notes: DTT and QTT are computed from weighted DFs.
CBA
CBA
CBA
BAC
Selection 2
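The figure notes above describe cluster (block) bootstrap draws of distribution functions and 90% uniform confidence bands. A minimal sketch of that construction on simulated data is given below, using a sup-t band; it only illustrates the idea and is not the cited R code of Chernozhukov et al. (2019).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered data: 50 blocks of 20 observations each (assumed structure).
blocks = [rng.normal(loc=rng.normal(), size=20) for _ in range(50)]
data = np.concatenate(blocks)
grid = np.linspace(data.min(), data.max(), 100)

def edf(sample, grid):
    """Empirical distribution function evaluated on a grid."""
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

f_hat = edf(data, grid)

# Cluster (block) bootstrap: resample whole blocks with replacement.
B = 999
boot = np.empty((B, grid.size))
for b in range(B):
    idx = rng.integers(0, len(blocks), size=len(blocks))
    boot[b] = edf(np.concatenate([blocks[i] for i in idx]), grid)

# Sup-t uniform 90% band: scale pointwise deviations by their bootstrap sd.
se = boot.std(axis=0)
se[se == 0] = 1e-12
t_sup = np.abs((boot - f_hat) / se).max(axis=1)
crit = np.quantile(t_sup, 0.90)
lower, upper = f_hat - crit * se, f_hat + crit * se
```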
<|MaskedSetence|> Relatively few, however, have focused on the cost of generating persuasive messages. In practice, such costs play an important role in determining when, where, and how persuasion takes place. <|MaskedSetence|> <|MaskedSetence|> Closest to the present paper is that of Dughmi and Xu (2016), who characterize the complexity of the model introduced by Kamenica and Gentzkow (2011). .
**A**: The purpose of this paper is to formally establish a key driver of these costs—namely, the computational resources (natural or artificial) required to generate persuasive messages. Given a suitable formalization of the persuasion problem, computational complexity theory can be used to characterize its production function. **B**: A variety of authors have advanced theories of the benefits of persuasion—i.e., what persuaders can achieve in various circumstances (e.g., Milgrom and Roberts, 1986; Grossman and Hart, 1980; Crawford and Sobel, 1982; Kamenica and Gentzkow, 2011; Schwartzstein and Sunderam, 2021; Aina, 2021). **C**: This approach has previously been applied to explain a variety of economic and social phenomena, such as incomplete uptake of public information (Aragones et al., 2005), slow convergence to Nash equilibrium (Daskalakis et al., 2009), narrow choice bracketing (Camara, 2022), and persistent market inefficiencies (Spear, 1989).
BAC
BAC
BAC
BAC
Selection 3
<|MaskedSetence|> Bias adjustments are costly in the sense that they reduce the signal strength of the instruments. We show that a first-stage Ridge regression reduces the implicit price for a bias adjustment, which comes along with improved (finite-sample) properties in the second stage. <|MaskedSetence|> <|MaskedSetence|> This can be considered an average-case analysis over dense parameters. Chamberlain and.
**A**: Swanson, 2005; Donald and Newey, 2001). **B**: We assume that each of the instruments has only a small, independent random effect on the endogenous variable. **C**: Finally, the results mentioned above generalize to settings where the number of instruments is larger than the number of observations, which has not been discussed in the IV literature so far. To derive our results, we make the following assumptions.
ACB
ACB
ACB
CAB
Selection 1
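The record above argues that a Ridge first stage stabilises IV estimation when there are many weak instruments. The following sketch, on a simulated design with many small independent first-stage effects, shows the two-step idea; the penalty choice and data-generating process are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n, k = 500, 200                       # many instruments relative to n
Z = rng.normal(size=(n, k))
pi = rng.normal(scale=0.05, size=k)   # many small, independent first-stage effects
u = rng.normal(size=n)
x = Z @ pi + u                        # endogenous regressor
y = 1.5 * x + u + rng.normal(size=n)  # true coefficient 1.5, endogeneity via u

# First stage: Ridge instead of OLS to stabilise the fitted instrument.
x_hat = Ridge(alpha=10.0).fit(Z, x).predict(Z)

# Second stage: IV estimator using the ridge-fitted first stage as instrument.
beta_iv = (x_hat @ y) / (x_hat @ x)
print(beta_iv)
```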
We take a step towards obtaining a provable two-sided learning algorithm for decentralized matching markets. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Further, CA-ETC is decentralized. Note that in real-world applications, like two-sided labor markets (Upwork, Uber, Restaurant), crowd-sourcing platforms (Mechanical Turk), scheduling jobs to servers (AWS, Azure) Dickerson et al. (2019); Even et al. (2009), the preferences of the player side are a priori unknown and a two-sided learning algorithm is necessary. .
**A**: Our proposed schemes, ETGS (with blackboard) and CA-ETC, are based on the Explore Then Commit (ETC) algorithm and are also collision free, similar to Kong & Li (2023). **B**: However, the algorithms are able to learn the preferences for both the agent and the arm side simultaneously through obtained samples in the exploration phase. **C**: To be concrete, we do not assume that the preferences of the players are known to the arms a priori.
CAB
CAB
CAB
CAB
Selection 3
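The schemes discussed above build on the Explore-Then-Commit template. A single-agent bandit version of ETC, stripped of the matching and collision machinery, is sketched below for intuition; all parameters are illustrative.

```python
import numpy as np

def explore_then_commit(means, horizon, explore_per_arm, seed=0):
    """Toy ETC on a stochastic bandit with Gaussian rewards (illustrative only)."""
    rng = np.random.default_rng(seed)
    k = len(means)
    totals = np.zeros(k)
    counts = np.zeros(k)
    total_reward = 0.0
    for t in range(horizon):
        if t < k * explore_per_arm:
            arm = t % k                                            # exploration: round-robin
        else:
            arm = int(np.argmax(totals / np.maximum(counts, 1)))   # commit to empirical best
        r = rng.normal(means[arm], 1.0)
        totals[arm] += r
        counts[arm] += 1
        total_reward += r
    return total_reward

print(explore_then_commit(means=[0.2, 0.5, 0.8], horizon=2000, explore_per_arm=50))
```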
On the other hand, theories may be unnecessary for AI. Drawing a parallel from speech recognition, significant advancements have been achieved by prioritizing prediction over theory. Fred Jelinek, director of IBM’s speech recognition lab, famously stated, “Every time I fire a linguist, the performance of the speech recognizer goes up” (Moore 2005). <|MaskedSetence|> This aligns with Shmueli’s (2010) argument that accurate prediction can sometimes eliminate the need for theoretical explanation, and DeStefano et al.’s (2022) finding that providing humans with interpretable algorithms may sometimes lead to worse decisions. It remains uncertain whether the role of theory in AI-driven strategy will align more closely with Lewin’s theory-centric view or Jelinek’s data-driven approach. <|MaskedSetence|> <|MaskedSetence|>
**A**: Regardless, the integration of AI into SDM offers a unique opportunity to explore and potentially redefine the role and necessity of theories within strategy. **B**: This perspective suggests that AI might make strategic decisions without relying on explicit theories, instead leveraging its vast data processing capabilities to make predictions directly (see, e.g., Ludwig and Mullainathan 2024). **C**: This exploration may lead to new insights about the nature of strategic thinking itself, whether performed by humans or AI systems..
BAC
BAC
BAC
ABC
Selection 3
<|MaskedSetence|> <|MaskedSetence|> This is illustrated by the leap in power harnessed per unit of information in the transition from prokaryotes to eukaryotes (Fig. 1). Figure 1: Scaling of metabolic rate to cell volume. The y-axis represents the power harnessed per unit of information, in watts per gene, with prokaryote and eukaryote averages in grey lines. <|MaskedSetence|>
**A**: Figure from  [45].. **B**: In its mature form, the eukaryotic cell combines revolutionary increases in information storage and processing with much more powerful metabolic systems (epitomized by the mitochondrion). **C**: The eukaryotic cell - arguably the most important development in evolution after the emergence of life itself, and the basis for all complex organisms - is a perfect example.
CBA
ACB
CBA
CBA
Selection 3
<|MaskedSetence|> <|MaskedSetence|> Accordingly, this periodicity must be taken into account when building the model, as well as in the preliminary analysis concerned with testing for unit roots in panel data. <|MaskedSetence|> Tests were proposed both on the basis of a pooled model and on the basis of combining p-values. The authors obtain classical asymptotic distributions (normal and chi-squared) for the tests under consideration with N fixed and T → ∞. .
**A**: The periodicity can be quarterly or monthly, as well as daily or intraday. **B**: Since economic and financial data are often observed at some periodicity, there is a need to test for seasonal unit roots, that is, unit roots at different frequencies. **C**: Testing for seasonal unit roots in panel data was considered in Lee and Shin (2006) using the instrumental-variables approach of Shin and Kang (2006).
BAC
BAC
CAB
BAC
Selection 2
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Some numerical experiments are provided in Section 4. Section 5 concludes this paper. The appendices contain some of the detailed calculations which are omitted in the main body of the paper and also some extensions of the baseline affine SV model. .
**A**: In Section 3, we derive our MM estimators and establish the central limit theorem. **B**: In Section 2, we present the affine SV model and its baseline SV model, then calculate the moments and covariances of the returns of the asset price. **C**: The rest of this paper is organized as follows.
CBA
BAC
CBA
CBA
Selection 1
1.2 Related Literature ‘Framing’ analysis Unfortunately, ‘framing’ analysis does not have a clear definition in the computational literature, as evidenced by the variety of tasks that arise in a recent survey of 37 ‘framing’ studies (Ali and Hassan 2022). A starting point for many framing approaches is to leverage existing corpora of labelled datasets. Here, the media frames corpus (MFC) (Card et al. 2015) and the gun violence frame corpus (GVFC) (Liu et al. 2019a) have been the basis of many methodological contributions. However, these datasets conceptualise ‘framing’ as dimensions (à la Chong and Druckman (2007)) of a topic or issue, not conceptual frames as we have distinguished earlier. <|MaskedSetence|> 2013) such as ‘economic’, ‘public opinion’ and ‘cultural identity’ applied to three issues (‘immigration’, ‘smoking’, and ‘same-sex marriage’). Thousands of annotations were recorded as to whether one of these dimensions was associated with the text on a given issue. <|MaskedSetence|> Typically, computational methods approach framing in this way as a multi-class classification problem, using supervised machine learning methodologies such as feature engineering, classifier selection and k-fold evaluation (Field et al. 2018; Liu et al. 2019a; Akyürek et al. 2020; Zhang et al. <|MaskedSetence|> Common to all of these approaches is the need for large amounts of labelled ground-truth data, and consequently, the outcome that methods are not generalisable beyond the topics under study..
**A**: The MFC is composed of 15 generic ‘frames’ (Boydstun et al. **B**: Likewise, the GVFC follows a similar approach, albeit tied more tightly to gun violence, ‘frames’ are equivalent to dimensions, and include ‘2nd Amendment’, ‘Politics’, and ‘Public Opinion’. **C**: 2023).
ABC
ABC
ABC
ABC
Selection 1
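The multi-class classification setup described above (feature engineering, classifier selection, k-fold evaluation) commonly looks like the following scikit-learn sketch. The tiny labelled corpus is invented for illustration and is not MFC or GVFC data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: each text is annotated with one issue 'dimension'.
texts = [
    "the policy will raise costs for small businesses",
    "polls show most voters support the measure",
    "the tradition reflects our shared cultural identity",
    "the tax change hurts household budgets",
    "survey respondents overwhelmingly oppose the bill",
    "the festival celebrates the community's heritage",
]
labels = ["economic", "public opinion", "cultural identity",
          "economic", "public opinion", "cultural identity"]

# TF-IDF features plus a linear classifier, evaluated with k-fold cross-validation.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, texts, labels, cv=2)
print(scores.mean())
```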
<|MaskedSetence|> For reference, we have added a vertical dashed line at 12 years of education, which corresponds (approx.) with a complete senior high school. <|MaskedSetence|> On the other hand, except for Phuket with 12.79 years of education on average, all the provinces that do not belong to the Bangkok-Pattaya region have average years of education below 12 years, while the national average is 10.87 years. <|MaskedSetence|> For reference, we have added a vertical dashed line at 12 years of education, which corresponds (approx.) with a complete senior high school. The national average is 10.87 years..
**A**: In Table 5 we present the average years of education for each region. Figure 12: Regression fitted for each one of the regions. **B**: We observe that Bangkok-Pattaya is the only region whose average years of education is above the 12 years threshold, with most of the provinces in the region above the threshold, with the only exceptions of Chonburi and Rayong which are a little below the threshold with 11.98 and 11.93 years, respectively. **C**: Finally, in Figure 12 we present the regression fitted for each one of the regions.
CBA
CBA
CBA
CBA
Selection 2
In contrast, characterizing optimal mechanisms in a setting with multiple objects and multi-dimensional types is known to be challenging. Despite revenue maximization being an important issue that many researchers have tackled (e.g., Manelli and Vincent, 2006; Pavlov, 2011; Haghpanah and Hartline, 2021; Giannakopoulos and Koutsoupias, 2018; Daskalakis et al., 2015), no complete characterization has been derived even for simple scenarios involving just two bidders and two items. <|MaskedSetence|> Fotakis et al. (2016); Goldner and Lundy (2024) have proposed approximation mechanisms that provide certain performance guarantees, yet these do not lead to optimal solutions in general domains. The matching theory literature has also characterized efficient ordinal mechanisms (Bogomolnaia and Moulin, 2001; Che and Kojima, 2010; Liu and Pycia, 2016; Ashlagi and Shi, 2016) and applied the insights to practical applications such as school choice (Abdulkadiroğlu and Sönmez, 2003; Abdulkadiroğlu et al., 2009). <|MaskedSetence|> <|MaskedSetence|>
**A**: To address this challenge, we utilized extreme value theory to fully characterize the efficient mechanism in the large market limit under certain conditions. . **B**: However, it remains unknown whether and under which conditions ordinal mechanisms, which are no-screening mechanisms in the sense that they do not require money burning, can perform better than screening mechanisms. **C**: Moreover, the extant partial characterization has revealed that the form of revenue-maximizing auctions varies significantly depending on specific situational details. The literature has also shown that maximizing the residual surplus in a multi-object environment is as challenging as maximizing the revenue.
CBA
BAC
CBA
CBA
Selection 4
the data [From the 80 million tiny images] need[ed] to be manually cleaned in order to make a sort of good supervised training dataset. <|MaskedSetence|> And so he organized the CIFAR summer school, he got all the summer school students sitting down. <|MaskedSetence|> (Fergus, 2022, Interview No. <|MaskedSetence|>
**A**: 2). . **B**: And Geoff wants you to do this. **C**: So how did it work? So I think Alex Krizhevsky and I wrote a labelling routine to actually, you know, have labelling interface where all the students would sit down, and we will go through the images, cleaning them up, and we decided that, Geoff decided he was going to pick, you know, these 10 Super categories, and then each one of which had subcategories that form the CIFAR 100.
BCA
BCA
BCA
ABC
Selection 3
<|MaskedSetence|> The notion of sink equilibrium, a strongly connected component with no outgoing arcs in the strategy profile graph associated with a game, was introduced in [22]. <|MaskedSetence|> The value of a sink equilibrium is defined as the expected social value of the steady state distribution induced by a random walk on that sink. Later work established further connections between Price of Sinking and Price of Anarchy via the (λ, μ)-robustness framework [42]. <|MaskedSetence|>
**A**: A number of negative, PSPACE-hard complexity results for analyzing and approximating sink equilibria have been established in different families of succinct games [20, 32].. **B**: Sink equilibria. **C**: They also defined an analogue to the notion of Price of Anarchy [28], the Price of Sinking, the ratio between the worst case value of a sink equilibrium and the value of the socially optimal solution.
BCA
CAB
BCA
BCA
Selection 4
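A sink equilibrium, as defined above, is a strongly connected component of the strategy profile graph with no outgoing arcs. Given any directed graph, such components can be found by condensing the graph into its SCCs and keeping those with out-degree zero; the toy graph below is invented for illustration.

```python
import networkx as nx

# Invented strategy-profile graph: nodes are profiles, arcs are improving moves.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "b"), ("a", "d"), ("d", "d")])

# Condense into strongly connected components and keep those with no outgoing arcs.
condensation = nx.condensation(G)
sinks = [condensation.nodes[c]["members"]
         for c in condensation.nodes
         if condensation.out_degree(c) == 0]
print(sinks)  # e.g. [{'b', 'c'}, {'d'}]
```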
Ex post incentive-compatible mechanism design via deep learning In recent years, deep learning approaches to design auction mechanisms have received significant attention (Golowich et al., 2018; Feng et al., 2018; Duetting et al., 2019). <|MaskedSetence|> <|MaskedSetence|> Essentially, they seek to identify a mechanism where a bidder has approximately no incentive to conceal its valuation, regardless of the reported valuations of the opponents—a property that does not hold for most mechanisms used in practice. Duetting et al. (2019) offer a concentration bound to empirically assess the violation of incentive compatibility. However, this bound presumes that the ex post violation can be precisely determined, an assumption not met by their methodology. Curry et al. (2020) address this issue by linearizing the learned neural network, effectively reducing the problem to an integer program that allows for an accurate estimation of the error. <|MaskedSetence|> (2023) use deep learning to learn auction mechanisms within randomized affine maximizer auctions, a class within which each mechanism is exactly incentive compatible. .
**A**: These constraints enforce the mechanism to be ex post incentive compatible over a set of buyer values sampled from the prior distribution. **B**: Curry et al. **C**: These efforts aim to design mechanisms that are nearly incentive compatible by incorporating constraints into the deep learning optimization problem.
CAB
CAB
CBA
CAB
Selection 4
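The empirical incentive-compatibility audit described above boils down to estimating, over sampled valuation profiles, how much a bidder could gain by misreporting. The sketch below runs that audit on a plain second-price auction, which is exactly incentive compatible, so the estimated ex post regret is zero; for a learned neural mechanism one would swap in its allocation and payment rule. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_and_pay(bids):
    """Single-item second-price auction: highest bid wins, pays second-highest."""
    winner = int(np.argmax(bids))
    payment = np.sort(bids)[-2]
    return winner, payment

def expost_regret(values, misreport_grid):
    """Max utility gain bidder 0 can obtain by misreporting, values held fixed."""
    winner, pay = allocate_and_pay(values)
    truthful_u = (values[0] - pay) if winner == 0 else 0.0
    best_u = truthful_u
    for b in misreport_grid:
        bids = values.copy()
        bids[0] = b
        w, p = allocate_and_pay(bids)
        u = (values[0] - p) if w == 0 else 0.0
        best_u = max(best_u, u)
    return best_u - truthful_u

grid = np.linspace(0, 1, 101)
regrets = [expost_regret(rng.uniform(size=3), grid) for _ in range(1000)]
print(max(regrets))  # ~0 for a second-price auction
```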
Table 1 illustrates that recent studies have evaluated CEE across national, regional, industry, provincial, and municipal levels, considering regular, embedded carbon emissions, water pollution and carbon neutrality, and coordinated governance perspectives. The primary DEA-based non-parametric methods for CEE include radial, non-radial, and directional distance functions. In terms of the radial model, Liu et al. (2023) applied the BCC model to assess changes in industrial eco-efficiency across 16 prefecture-level cities in Anhui, China from a static viewpoint. Ding et al. (2019) performed a comparative analysis of CEE among 30 provinces in China using the CCR and BCC models. However, these conventional radial models overlook the selection of radial direction in efficiency measurement and encounter issues with slack efficiency measurement (Tian and Mu, 2024). To address these issues, several studies have utilized the non-radial super-SBM model to measure CEE. For example, Jiang et al. (2020) applied super-SBM to evaluate CEE in the logistics industry across 12 pilot regions in China. Gao et al. (2021) integrated the trade openness factor into the embedded carbon emission perspective and employed super-SBM to analyze CEE across 28 industrial sectors in China. Jiang et al. (2024) measured the CEE of 30 cities in Northwest China from 2011 to 2020 using a super-efficient SBM model based on the dual perspectives of water pollution and carbon neutrality. <|MaskedSetence|> (2022) utilized super-SBM to assess CEE at 42 thermal power plants in China in 2020 from a microscopic perspective. Meanwhile, to enhance environmental efficiency assessment incorporating undesirable outputs, Chung et al. (1997) introduced the radial DDF based on Shephard’s approach. However, the radial DDF fails to eliminate inefficiencies caused by input and output slack, potentially leading to overestimating CEE. Färe and Grosskopf (2010) introduced a generalized NDDF for total factor energy productivity, relaxing the requirement for desired and undesirable outputs to vary proportionally. Fukuyama and Weber (2009) developed the SBM-DDF model for CEE, which integrates undesirable outputs to mitigate radial and directional biases. Moreover, some studies have proposed a multi-stage DEA model combining parametric and non-parametric approaches (Zhao et al., 2022). Among them, the most representative is the three-stage DEA, which is capable of incorporating environmental factors and random noise in the assessment of DMU efficiency (Hu and Xu, 2022). However, it is noteworthy that the assessment of CEE depends heavily on the value judgments that policymakers make about the resource allocation scenarios and the future of the economy and the environment (Xu et al., 2023). <|MaskedSetence|> In the assessment of CEE, policy preferences can influence local behavior and decision-making through a variety of mechanisms that promote the transition of the industrial sector towards a higher level of efficiency and cleaner production patterns (Wu et al., 2017). The empirical study conducted by Meng et al. (2021) revealed significant discrepancies in the carbon emission performance of the manufacturing sector when subjected to scale-oriented and innovation-oriented carbon reduction policy preferences. Therefore, a profound comprehension and rigorous consideration of policy preferences is essential to assess alterations in CEE with precision.
Such an analysis will assist the government in formulating more effective carbon emission reduction policies, considering the varying circumstances of different regions. Nevertheless, only a limited number of studies that assess CEE take policy preferences into account. <|MaskedSetence|> These factors could significantly influence the assessment of CEE based on policy preferences, which represents a limitation of the current research (Qu et al., 2022). In conclusion, the objective of this study is to propose a model for measuring total factor CEE that can accommodate the various policy preference scenarios and account for potential data uncertainties..
**A**: In addition, we should consider the potential implications of data uncertainties, which may arise from factors such as statistical inaccuracies or human interference. **B**: This process highlights the potential impact of policy preferences on CEE, which refers to the specific preferences or prioritized objectives the government holds when formulating policies or selecting options. **C**: Fang et al.
CBA
CBA
BAC
CBA
Selection 1
As Wald (1940) was first to discover, it can be instructive to think of minimax decision rules as equilibrium rules in a fictitious, zero-sum game against Nature. In particular, any equilibrium of the game in which the decision maker picks a decision rule and an adversarial Nature picks a prior characterizes a minimax (regret) decision rule and corresponding prior; furthermore, known features of zero-sum games may allow one to claim uniqueness of the solution and the like. See Lehmann and Casella (1998, Theorem 1.4, Chap. <|MaskedSetence|> The first explicit use of this technique in the literature on decisions under partial identification is in Stoye (2007). <|MaskedSetence|> They are liberally used in this paper’s proofs; we here use them to briefly discuss some intuitions and contrast ex-ante versus ex-post robust Bayes, but also randomization versus no randomization, in terms of the fictitious game’s timing. First, this perspective gives further intuition for why MMR rules tend to randomize. Indeed, nontrivial simultaneous move zero-sum games frequently fail to have pure strategy equilibria, and that is precisely what happens here. Furthermore, in those cases where optimal decision rules randomize, the equilibria are “pooling” equilibria in which data are noninformative. <|MaskedSetence|>
**A**: This explains why these rules also tend to be nonunique; what disciplines them is Nature’s best-response condition, not learning from data in equilibrium. **B**: While usually stated for “agnostic” minimax rules, it is easy to see that these facts extend to our setting. **C**: 5) for a formal statement.
CBA
CBA
CBA
BCA
Selection 1
In this paper we have studied how collusive strategies arise in an open-loop, two-player optimal execution game. We first introduced the concept of collusive Pareto optima, that is, a vector of selling strategies whose IS is not dominated by the IS of other strategies for each iteration of the game. <|MaskedSetence|> Finally, we have shown how the Pareto-optimal strategy, that is indeed the collusion strategy in this setup, is the TWAP for risk-neutral agents. As our main contribution, we have developed a Double Deep Q-Learning algorithm where two agents are trained using RL. The agents were trained and tested over several different scenarios where they learn how to optimally liquidate a position in the presence of the other agent and then, in the testing phase, they deploy their strategy leveraging what they learned in the previous phase. The different scenarios, large, moderate and low volatility, helped us to shed light on how the trading interactions on the same asset made by two different agents, who are not aware of the other competitor, give rise to collusive strategies, i.e. <|MaskedSetence|> This, in turn, is due to agents who keep trading at a different speed, thus adjusting the speed of their trading based on what the other agent is doing. <|MaskedSetence|>
**A**: strategies with a cost lower than a Nash equilibrium but higher than a proper collusion. **B**: Agents do not interact directly and are not aware of the other agent’s trading activity, thus strategies are learnt from the information coming from the impact on the asset price in a model-agnostic fashion.. **C**: We furthermore showed how a Pareto-efficient set of solutions for this game exists and can be obtained as a solution to a multi-objective minimisation problem.
BCA
CAB
CAB
CAB
Selection 4
The methods apply to settings beyond online advertising. In the field of economics, particularly in policy evaluation, the assumptions of group independence and randomness are often violated due to interconnected markets and spillover effects. For instance, a government subsidy in one region can influence prices and demand in neighboring regions, complicating the causal interpretation of policy impacts. <|MaskedSetence|> SPORTSCausal provides a robust framework to account for these complexities, enabling more accurate estimation of policy impacts by adjusting for time-series spillovers. This is particularly valuable in economic studies where interventions can have far-reaching and dynamic effects across different regions or markets. In bioinformatics, high-density peptide arrays, for example, are essential for studying peptide-protein interactions (Zheng et al., 2020, Mergaert et al., 2022, Amjadi et al., 2024, Parker et al., 2024), but their densely packed format can lead to spillover effects, where the binding of one peptide affects neighboring ones. These effects can distort estimates of true binding affinities, complicating the interpretation of results. SPORTSCausal helps address these challenges by accounting for spillover in time-series data, providing more accurate estimates of peptide-antibody interactions. This is crucial for applications like vaccine development, where precise identification of immunogenic peptides is vital. In social and behavioral sciences, particularly in psychology, interventions often influence not only the participants but also those around them, leading to spillover effects. For instance, studies like Liu et al. (2022) highlight the importance of retrieval practice in enhancing memory. However, when applying such practices in group settings, spillover effects can occur, where the benefits of one participant’s practice may influence others, potentially confounding the results. <|MaskedSetence|> <|MaskedSetence|> This leads to more reliable conclusions that can better inform educational and therapeutic strategies. 4 Conclusions.
**A**: SPORTSCausal is well-suited to address these complexities by accounting for spillover effects in the analysis of memory interventions. **B**: Traditional econometric models might overestimate or underestimate the true effect of such interventions due to unaccounted spillovers. **C**: By applying this method, researchers can obtain a clearer picture of the true impact of retrieval practices, ensuring that the observed improvements in memory are accurately attributed to the intervention itself, rather than external influences.
BAC
ACB
BAC
BAC
Selection 3
We also note that conducting A/B tests (i.e., randomized control trials, which are generally regarded as the gold standard for causal analysis) with the full ES experience is often impossible due to practical or ethical constraints. It might be possible to use A/B experimentation for valuing a specific feature of an ES, but such an approach usually only operates with a small customer sample and requires a disciplined methodology to map to the entire population. Moreover, A/B testing may prove expensive when done repeatedly for new feature rollout, and may not be possible for all available features. This motivates our approach, where we leverage observational causal modeling, which requires additional assumptions but provides a scalable and repeatable solution paradigm. In this work, we build a large-scale Dynamic Causal Model (DCM) trained on billions of customer actions, and apply it to the problem of estimating the value generated by an ES. The DCM helps us answer a counterfactual question – ”How much money would the business lose if the Engagement Surface didn’t exist at all (or didn’t exist during a specific period of time)?”. Measuring this associated treatment effect is a good proxy for the value generated by the ES. <|MaskedSetence|> The paper is structured as follows. <|MaskedSetence|> <|MaskedSetence|> We provide an overview of implementation in Sec. 4 and showcase results in Sec. 5. We end with conclusions and our recommendations for further study in Sec. 6..
**A**: In Sec. 2, we provide an introduction and motivation for the dynamic causal model. **B**: In Sec. 3, we discuss inclusion of contemporaneous treatment effects arising from concurrent customer behavior (which is relevant for the use case of valuing an ES). **C**: We highlight that while our solution is applicable to any ES, we focus in particular on cases where ES features may be highly interconnected in customer usage patterns, and may drive the most value when operating synchronously. To the best of our knowledge, this is the first paper presenting a scalable and generalizable causal model for estimating the value generated by an engagement surface.
CAB
CAB
CAB
CAB
Selection 4
<|MaskedSetence|> The problem naturally fits in the bilevel optimization framework since the household’s solar adoption decisions across the population impact the central planner’s ability to achieve the targets, and the central planner can influence the household’s decisions by setting the subsidy levels. Once the central planner sets the subsidy policy, the households react to these values and either choose one of the solar products or do not adopt solar technology. <|MaskedSetence|> <|MaskedSetence|> In the following, we first present a household’s lower-level problem and then the central planner’s upper-level problem. 3.3.1 Household’s optimization problem.
**A**: We will refer to the lower level (respectively upper level) problem as the household’s (respectively central planner’s) optimization problem. **B**: In our model, the central planner is the leader, minimizing the average discounted cost due to subsidies, and the households in the region are the followers, minimizing their expected total discounted consumption, adoption, and compensation costs. **C**: We present a bilevel formulation to derive the optimal subsidy policy for the central planner.
BCA
CBA
CBA
CBA
Selection 4
<|MaskedSetence|> We only assume minimal rationality on the part of the bidders—bidders are rational in the sense of not playing obviously dominated strategies. <|MaskedSetence|> We first sketch the intuition, and then provide the formal arguments. At any history, consider a non-leading bidder’s choice as to whether to quit. <|MaskedSetence|> Suppose that there is some bundle for which the bidder’s valuation is higher than the current leading price for that bundle. Consider the following strategy where the bidder raises the price for this particular bundle and never revises her bid afterwards. Clearly, this continuing strategy guarantees a non-negative payoff for the bidder. Thus, at least for the purpose of deciding whether to quit, it is “obviously optimal” not to quit..
**A**: In this subsection, we study the strategic behavior of the bidders and establish the rank-guarantee property of CASA. **B**: Formally we adopt the solution concept of implementation in strategies that are not obviously dominated (see Li, (2017) and Li and Dworczak, (2021)). **C**: Obviously, as quitting is irreversible, quitting the auction leads to a best possible outcome of a zero payoff.
ABC
ABC
ABC
BCA
Selection 3
<|MaskedSetence|> (2)). <|MaskedSetence|> In our case, we are considering aggregated electricity consumption data across multiple crypto-mining firms, and we are not aware of the locations of mining firms, which is why, in this article, we consider average Texas temperature as a predictor. We observe that, during the daytime, the strong correlation between temperature and system-wide electricity prices can obscure the cooling energy consumption and, even in non-summer months, temperatures can remain high into the late evening. As illustrated in Fig. 5, from 10 PM to 6 AM, both in non-summer and summer periods, we observe weak positive correlations between electricity consumption and temperature with p-values close to 0. <|MaskedSetence|>
**A**: The cooling requirements are influenced by factors such as ambient temperature, the efficiency of the cryptocurrency miners, and hashing energy consumption. **B**: This confirms the physical principle that higher ambient temperatures necessitate more electricity for cooling. . **C**: A significant portion of the energy consumed by cryptocurrency mining firms is dedicated to cooling (see eq.
ABC
CAB
CAB
CAB
Selection 2
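The late-night correlation analysis described above can be reproduced schematically as follows. The synthetic hourly panel, column names, and the definition of summer as June through August are assumptions made for the sketch, not the article's data.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Synthetic hourly panel standing in for mining-firm load and Texas temperature.
rng = np.random.default_rng(0)
idx = pd.date_range("2022-01-01", periods=24 * 365, freq="h")
temp = 70 + 20 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 5, len(idx))
load = 500 + 2.0 * temp + rng.normal(0, 30, len(idx))  # cooling load rises with temperature
df = pd.DataFrame({"timestamp": idx, "temperature_f": temp, "consumption_mwh": load})

# Restrict to 10 PM - 6 AM and split summer (assumed June-August) vs. the rest.
night = df[(df["timestamp"].dt.hour >= 22) | (df["timestamp"].dt.hour < 6)]
summer = night["timestamp"].dt.month.isin([6, 7, 8])

for label, grp in [("summer", night[summer]), ("non-summer", night[~summer])]:
    r, p = pearsonr(grp["consumption_mwh"], grp["temperature_f"])
    print(f"{label}: r = {r:.2f}, p = {p:.2g}")
```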
At t = 0, households have $0 savings, firms have 0 units of inventory and set default prices and wages, the central bank sets default interest rate, and the government sets default tax rate and gives $0 of tax credits. For each subsequent step t ≥ 0, households take in observations to set labor hours and request consumption. <|MaskedSetence|> They then set the price and wage for the next step. <|MaskedSetence|> <|MaskedSetence|> The government collects taxes to set tax rate and distribute credits in the next step. .
**A**: The central bank monitors prices and productions to set interest rate for the next step. **B**: Firms use labor to produce goods (3)-(4), fulfil consumption (1) and update inventory (5). **C**: Each household updates savings based on realized consumption (2) and pays taxes to the government (2).
BCA
BCA
ACB
BCA
Selection 2
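The within-period ordering described above (households choose labor and consumption; firms produce, sell, and post next-period price and wage; households settle savings and taxes; the central bank and government then react) maps onto a simulation loop like the toy below. Every behavioural rule in it is an invented placeholder, not the paper's equations (1)-(5); only the ordering of updates is meant to match the text.

```python
# Toy one-household, one-firm economy illustrating only the order of updates.
state = dict(savings=0.0, inventory=0.0, price=1.0, wage=1.0,
             interest=0.02, tax=0.10, credits=0.0)

for t in range(50):
    # Household: choose labor hours and requested consumption (placeholder rules).
    labor = 8.0
    demand = 4.0 + state["credits"] / state["price"]

    # Firm: produce with labor, fulfil consumption, update inventory,
    # then set next period's price and wage.
    output = 1.2 * labor
    sold = min(demand, state["inventory"] + output)
    state["inventory"] += output - sold
    state["price"] *= 1.01 if demand > output else 0.99
    state["wage"] *= 1.005

    # Household: update savings from realized consumption and pay taxes.
    income = state["wage"] * labor
    taxes = state["tax"] * income
    state["savings"] = ((1 + state["interest"]) * state["savings"]
                        + income - taxes - state["price"] * sold)

    # Central bank reacts to prices; government sets next-period credits.
    state["interest"] = max(0.0, 0.02 + 0.5 * (state["price"] - 1.0))
    state["credits"] = 0.5 * taxes

print(state)
```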
We can also construct a single all-effects network by summing all subnetworks, resulting in a multigraph where multiple edges between nodes are possible. For comparison against this general causality network we also construct a ‘baseline’ linear causality network by fitting a standard linear-VAR model. For consistency with our existing sub-networks, this standard VAR model is fit against the median target, which is a small departure from the traditional description shown in Section 3.1. We also compare against a Piecewise VAR network, which is the sum of the three median response networks. Figure 6 showcases the CCDF for these networks, while Table 7 provides summary statistics (we divide the multigraph degrees by 9, and the piecewise network by 3 to be on an equivalent scale). <|MaskedSetence|> The multigraph, which considers causality across a variety of variable scenarios identifies around 6 times the number of causal links compared to the standard VAR. Even restricting our attention to the median response, we see that splitting the linearity into our piecewise structure leads to 3 times as many identified relationships. <|MaskedSetence|> While the correlation structure for the standard VAR model resembles the linear-median response sub-network we previously identified, it surprisingly contains even fewer links than this sub-network. <|MaskedSetence|> A visual example of this could be seen in the Figure 1b, where the significant right tail-median positive effect could obscure the linear-median negative effect..
**A**: It would appear that rather than the tail-median effects producing false positives for linear-median effects, they actually obscure/cancel the linear-median effects. **B**: Both the multigraph and standard VAR networks lack a statistically significant relationship between out-degree and capitalisation, with the standard model having a significant negative relationship for in-degree, and the piecewise network having a significant relationship for out degree. **C**: The most stark difference is the number of identified links.
CBA
CBA
CBA
CBA
Selection 4
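The all-effects multigraph above is the sum of nine causal sub-networks, and the piecewise network is the sum of the three median-response ones. With adjacency matrices this is a one-liner; the random sub-networks below merely stand in for the estimated ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # number of institutions (illustrative)

# Nine binary sub-networks (3 target quantiles x 3 predictor regimes), here random.
subnets = [(rng.random((n, n)) < 0.05).astype(int) for _ in range(9)]

# All-effects multigraph: sum of all sub-networks (entries may exceed 1).
multigraph = sum(subnets)

# Piecewise network: sum of the three median-response sub-networks (assumed first 3).
piecewise = sum(subnets[:3])

# Degree comparison on an equivalent scale, as in the text.
print("mean multigraph out-degree / 9:", multigraph.sum(axis=1).mean() / 9)
print("mean piecewise out-degree / 3:", piecewise.sum(axis=1).mean() / 3)
```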
<|MaskedSetence|> The reason is that there are equilibria in which all solvers bid the minimum amount on individual trades, which is equivalent to skipping the individual auctions and only having a batch auction. If the individual auctions are in second price, then no solver can change this outcome by changing its bid on the individual auctions. Instead, when the individual trade-intent auctions are first-price auctions, each solver can manipulate the reference for fairness with its bids on individual trades, independently of the bids by other solvers. <|MaskedSetence|> <|MaskedSetence|> However, these higher fairness guarantees come at the expense of lower value returned to the traders. Mathematical Appendix.
**A**: We illustrate this point by showing that when the individual trade-intent auctions are second-price auctions, the fair combinatorial auction does not provide any additional fairness guarantees relative to the simple batch auction. **B**: In this case, we show that the fair combinatorial auction guarantees that all traders receive more than what they would earn in the simultaneous standard auctions. **C**: Hence, even though not all fair combinatorial auctions provide strong fairness guarantees in equilibrium, some do.
ABC
ABC
ABC
ABC
Selection 4
Interviewees agreed that IPRs can give foreign investors a sense of security in their investment, as argued in the literature (see section 2.2). <|MaskedSetence|> One foreign entrepreneur in Bangladesh highlighted that ‘weak IPR enforcement in Bangladesh is unlikely to discourage technology transfer if there is a strong enough business case to enter the market, and it seems that foreign companies are aware of and willing to accept the risks associated with entering the market in a low-income country’. <|MaskedSetence|> Trademarks are relevant to many small- to medium-sized enterprises as a means of establishing a brand. <|MaskedSetence|> Securing a trademark registration was considered to be much easier than applying for a patent..
**A**: However, it was also said that final investment decisions would be driven by the business case and market conditions. **B**: Other barriers to ITT that were raised included a lack of local expertise in implementing or maintaining climate technologies and a lack of knowledge about solutions available on the international market that could be applied in Bangladesh. Despite the overall low level of awareness and general scepticism about the current role of IPRs in Bangladesh, respondents indicated that trademarks are relatively widely used because the system is user-friendly and appropriate for the economic setting. **C**: In addition, regulatory requirements relating to standards require traders to register their trademarks with the Bangladesh Standards and Testing Institution.
ABC
CBA
ABC
ABC
Selection 4
• Total demand prediction. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> For example: economic activity rate, national currency exchange rate, demographics, and mortgage interest rate. To predict the total demand, one can use the full range of machine-learning methods and a large array of historical data, not necessarily data from a specific developer, since macroeconomic factors affect all developers similarly. .
**A**: There is an abundance of factors affecting total demand. **B**: In this paper we consider total demand as a known function. **C**: To fully implement the methods discussed here in real-life scenarios, it is essential to incorporate predictions of future total demand.
BCA
BCA
ACB
BCA
Selection 2
4 Empirical Strategy Our empirical strategy consists of three parts. <|MaskedSetence|> Second, we examine how the HEE affects people’s educational attainment in colleges and senior high schools by exploiting individual characteristics from the 2010 population census. As you will see, the magnitude of the HEE is a posterior term; the effect on college enrollment could be mechanical. <|MaskedSetence|> In addition to educational attainment, we also explore other outcomes associated with education (e.g., income and working industry). Finally, we switch to the panel data on rural villages and households from NFPS. <|MaskedSetence|>
**A**: Nevertheless, the HEE cannot directly affect people’s enrollment in senior high schools, so it represents their willingness to pursue education as more post-secondary education opportunities emerge, which we regard as the spillover effects of the HEE. **B**: We analyze how rural people respond to the HEE regarding their agricultural activities, work decisions, and migration decisions, assessing the consequences of the HEE in rural areas.. **C**: We first propose our preferred measure for the magnitude of the HEE across provinces and check its correlation with other reasonable measures.
CAB
CAB
CAB
ABC
Selection 1
2 Related Work Xia (2013) and Procaccia et al. (2009) proposed incorporating voting axioms into a machine-learning framework as a means of evaluating learned social choice mechanisms. In the space of auction design and matching there has been work on using neural nets for better mechanisms Dütting et al. <|MaskedSetence|> (2022) as well as complex preference structures Peri et al. <|MaskedSetence|> <|MaskedSetence|> (2021) has looked at how to learn new allocation mechanisms that bridge the gap between stability (as compared to the deferred acceptance algorithm Gale and Shapley (1962)) and strategyproofness (as compared to random serial dictatorship (RSD) Aziz et al. (2013a)). While the work of Ravindranath et al. (2021), Firebanks-Quevedo (2020), and most recently Anil and Bao (2021) has shown promise for learning mechanisms, these efforts do not closely consider the role of embeddings..
**A**: (2021). **B**: More recently, the work of Ravindranath et al. **C**: (2019); Pavlov (2011); Malakhov and Vohra (2008) including learning new types of auction mechanisms Curry et al.
CAB
CAB
BAC
CAB
Selection 2
<|MaskedSetence|> <|MaskedSetence|> Specifically, the Zephyr database has a “deal comment” for each M&A event, which provides a comprehensive deal review for each transaction, and the product is described as follows: “Our researchers consolidated the information into a comprehensive deal review presented in chronological order. Additional information includes deal structure, financing and payment methods, and references to regulatory or shareholder issues. <|MaskedSetence|> The strategic rationale behind the transaction is highlighted through quotes when available”. We determined whether the M&A was a technological M&A by manually identifying this content; 2. If the industry of the target company is a high-tech industry, we define this event as a technological M&A. The definition of high-tech industry comes from the National Bureau of Statistics (NBS); in the end, we identified a total of 7,867 technological M&A events. .
**A**: Following Ahuja and Katila (2001), we define technological M&As as: 1. **B**: If the description of an M&A event explicitly mentions that the purpose is to obtain technology, we consider this M&A event to be a technological M&A. **C**: The commentary is updated as the deal progresses.
ABC
ABC
ABC
ACB
Selection 3
<|MaskedSetence|> <|MaskedSetence|> The sender persuades the receiver by committing to a randomized signaling policy crafted with the aim of maximizing its utility. <|MaskedSetence|> We also get a linear programming characterization of the minimum amount of truth revealed to the receiver in any Stackelberg equilibrium. In the process we uncover a key element of such problems: persuasion must be subject to “trust constraints” for it to work. The influencer’s optimal policy must reveal some truth. .
**A**: We model an interaction between a strategic informed sender and an uninformed receiver. **B**: The receiver, on the other hand, attempts to know the true information of the sender. Our contribution is a characterization of the sender’s expected utility in a Stackelberg equilibrium [2] as a linear program. **C**: Our aim is to understand how much truth there is in such an influencer’s words.
CAB
CAB
CAB
ABC
Selection 2
<|MaskedSetence|> It also defines the treatment effects associated with our model. <|MaskedSetence|> The empirical investigation of the impact of an individual’s frequency of physical exercise on their level of self-esteem is described in Section 4. Section 5 concludes. <|MaskedSetence|>
**A**: The proof of our primary result, additional tables and simulation evidence are provided in the Appendices. . **B**: The following section outlines the model and describes the explicit manner in which social interactions enter the model and discusses the employed Bayesian Nash equilibrium concept. **C**: Estimation is addressed in Section 3.
BCA
BCA
BCA
ABC
Selection 3
In our empirical strategy, we regard the Korean applicants’ United States Patent and Trademark Office (USPTO) patents as indicators of the country’s frontier technologies (see Figure 1). Our results show that Korean frontier technologies tend to be globally more competitive (get more forward citations) when they build on learning experiences for incremental improvements and adaptation recorded as domestic UMs (captured by direct or indirect backward citation to UMs). <|MaskedSetence|> <|MaskedSetence|> In addition, we show that the country’s frontier technologies built upon UM contribute more to its own rather than to foreign innovations, compared to technologies without UM linkages. <|MaskedSetence|> Our findings are robust over alternative measures of UM reliance, time-variant effects to capture changes in IPR legislation, the relative use of UMs versus patents, and the role of the country’s major conglomerates. .
**A**: This underscores the long-term relevance of UMs for technological specialization at the frontier, which is an essential capability to produce high value-added technology that is different from what other countries can produce. **B**: These linkages and their impact weaken over the course of economic development, highlighting the particular relevance of the UM system in the early industrialization stage (in line with Kim et al. **C**: 2012).
BCA
BCA
ABC
BCA
Selection 2
The additional analysis on anticipated formal care use supports our findings on labor supply. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Our results indicate that the rate of coresidence with adult children did not change significantly, nor did older individuals’ expectations of receiving informal care from their adult children. This finding suggests that informal care remains preferable to formal care, consistent with our prior understanding of old-age support in China. .
**A**: However, this impact is predominantly driven by those with limited informal insurance. **B**: Overall, we find that the likelihood of anticipated formal care use increased by 3.1 percentage points following the LTC insurance rollout for individuals eligible for public insurance. **C**: Individuals living independently show a 7.7-percentage-point increase in expected formal care use, while those coresiding with children exhibit an insignificant change, with a point estimate of -0.8 percentage points. While our primary focus is on self-insurance, it is also important to examine whether public LTC insurance has crowded out informal insurance.
BAC
ABC
BAC
BAC
Selection 1
In this paper, we have introduced the Double/Debiased (Covariance-Oblivious) Convex Conditioned LASSO (Double/Debiased CoCoLASSO) estimator for treatment effects in high-dimensional settings with measurement error. <|MaskedSetence|> <|MaskedSetence|> Second, we propose a method of moments estimator for the measurement error variance, enabling implementation without prior knowledge of error covariance. Third, we establish theoretical results demonstrating √N-consistency and asymptotic normality of our estimator. Simulation studies corroborate our theoretical findings, demonstrating the estimator’s robustness to varying levels of measurement error and its superiority over naive approaches. <|MaskedSetence|>
**A**: Our key contributions are threefold. **B**: Notably, our covariance-oblivious approach nearly matches the performance of methods assuming known error variance, highlighting its practical utility.. **C**: First, we develop a Neyman-orthogonal score function valid under measurement error, incorporating a bias correction term for error-induced correlation.
ACB
ACB
ACB
CAB
Selection 2
Remark 4. One may expect positive dependence to make larger values of the sum in (6) more likely and thus the sum more likely to stochastically dominate a single random variable. We believe that this intuition does not hold due to the very heavy tails of the random variables under consideration. <|MaskedSetence|> <|MaskedSetence|> The situation is of course reversed for negative dependence. <|MaskedSetence|>
**A**: However, (6) can still hold for Pareto random variables with infinite mean that are positively dependent via some specific Clayton copula (see Chen et al., 2024c ). . **B**: If random variables are positively dependent and some of them do not take large values, it makes others more likely to take moderate values too, hence positive dependence hinders large values. **C**: It is known, for instance, that very large values of the sum of iid random variables with heavy tails are usually caused by a single random variable taking a large value, while other random variables are moderate.
BAC
CBA
CBA
CBA
Selection 3
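The discussion above concerns a relation like (6), comparing the sum of infinite-mean Pareto losses with a scaled single loss. Under the assumption that (6) is of this form, the iid case can be explored with a quick Monte Carlo check of survival probabilities; this is purely illustrative and proves nothing about the dependent cases discussed in the remark.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_sim = 0.5, 10**6   # tail index below 1: infinite mean

def pareto(size):
    # Standard Pareto with survival function x**(-alpha), x >= 1.
    return rng.uniform(size=size) ** (-1.0 / alpha)

x1, x2 = pareto(n_sim), pareto(n_sim)
single = 2 * pareto(n_sim)           # comparison variable: 2 * X

for t in [10, 100, 1000]:
    p_sum = np.mean(x1 + x2 > t)
    p_single = np.mean(single > t)
    print(t, p_sum, p_single)        # the iid sum tends to exceed the scaled single X
```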
Our study also provides further insights into the volatility dynamics by tracking the evolution of the probability distribution of the rescaled emissions growth rates and the changes in the scaling β parameter. The rescaled emission rates reveal that volatility dynamics are driven by significant changes in both tails of the distribution. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The evidence tends to support the latter scenario. In fact, larger regions, typically in higher-income countries, may have demonstrated greater adherence to ETS policy initiatives. This observation is consistent with the notion that regions hosting larger companies faced more stringent demands to reduce emissions swiftly. This connection is further supported by environmental indicators such as CO2 emissions, energy use, water extraction, and waste production, which tend to follow a power law relationship with company size [31]. Consequently, large companies, which significantly influence GDP growth volatility [32], also play a pivotal role in shaping the observed emissions dynamics..
**A**: This implies that, while decarbonising their economies, smaller economic regions are becoming less volatile or, conversely, larger economic regions are becoming more volatile. **B**: Therefore, both extreme decreases and increases in emission rates contribute to the overall volatility. On the other hand, we observed that the β parameter starts increasing from the implementation of the EU ETS and approaches zero during the transition from ETS-1 to ETS-2. **C**: This empirical evidence suggests no significant differences in volatility among regions of different economic sizes.
BCA
BCA
BAC
BCA
Selection 1
<|MaskedSetence|> This aggregate metric is often more relevant and scalable compared to evaluating the treatment effect on a specific subset of users or a particular segment of queries. <|MaskedSetence|> For instance, while understanding the treatment effect within a specific query segment might help fine-tune the user experience for that segment, knowing the overall impact across all users allows for more informed decisions when deploying changes at scale. This approach aligns with the broader objectives in fields such as marketing, where the goal is often to optimize the overall effectiveness of a campaign or product feature rather than just a localized improvement. <|MaskedSetence|>
**A**: By focusing on the population mean, researchers and practitioners can derive conclusions that are generalizable and applicable to the broader user base, making the findings more actionable for large-scale interventions. **B**: In the context of the Google Opinion Rewards (GOR) example discussed earlier, this population mean represents the overall treatment effect on all Google users. **C**: Similarly, in other domains like clinical trials or social sciences, researchers often aim to generalize findings to the entire population of interest, ensuring that the interventions are effective on a wide scale. .
BAC
BAC
BAC
CAB
Selection 3
<|MaskedSetence|> As Igami and Yang (2016, Section 3.2) report, the average number of shops per market rose from less than 0.5 during the 1970s to approximately 1.8 in the early 2000s. We now provide further evidence about the evolution of this industry. Figure 1 shows the average number of stores per market over time disaggregated by chain. This figure shows that the average number of stores per market has increased across all chains, with the most significant increase observed for McDonald’s compared to the other chains. <|MaskedSetence|> This figure reveals that the frequency of empty markets decreased steadily between 1970 and 2000, while the frequencies of monopoly, duopoly, and triopoly or more steadily increased over the same period. In contrast, during 2000-2004, the frequency of each market type has remained relatively stable. <|MaskedSetence|> The hypothesis tests developed in this paper can be used to evaluate whether the state distribution is homogeneous over any of the periods in the sample. .
**A**: This evidence suggests that the Canadian fast-food industry has been evolving between 1970 and the early 2000’s, and may have reached a steady state during the last years of the sample. **B**: The data spans an extensive period in which the Canadian fast-food industry grew considerably. **C**: Figure 2 shows the average number of competitors per market over time.
BCA
BCA
ACB
BCA
Selection 4
They note that relaxing this restriction to allow confounds would lead to “distributional complications”; our framework and results resolve these complications and study the implications of the confounds. <|MaskedSetence|> <|MaskedSetence|> Indeed, they argue that network structure matters much less than the state and information structures in their setting. By contrast, we show that in a standard fixed-state environment learning can be quite efficient on some networks and highly confounded on others. <|MaskedSetence|>
**A**: They focus instead on how private signal precisions and the evolution of the state, which changes over time, affect learning. **B**: Variations in the network structure can trace out a wide range of learning efficiencies, including nearly total information loss, which highlights the power of the confounding. . **C**: Related obstructions are also present in Dasaratha, Golub, and Hak (2023), which studies learning failures in network structures similar to our generations networks but has no formal results about how learning differs across networks.
BCA
CAB
CAB
CAB
Selection 2
<|MaskedSetence|> In this illustration, we consider two countries whose economic activity depends much more on their domestic demand than on demand by the other country, as is mostly the case empirically. <|MaskedSetence|> <|MaskedSetence|> Intuitively, this is because shocks in one country get transmitted to the other country only by the trade linkage, which is small relative to domestic demand. On the contrary, if the two countries follow a deterministic endogenous business cycle (in this case, a limit cycle), their non-linear dynamics perfectly synchronize. Combining the exogenous shock processes with the endogenous business cycles gives a Pearson correlation of 0.29 between the dynamics of the two countries, in line with empirical observations (Section 5.1). .
**A**: The economic intuition for why synchronization can help generate higher comovement is given in Figure 1. **B**: As we can see, the correlation of the macroeconomic dynamics of the two countries is virtually zero. **C**: Under exogenous business cycles, each country is hit by an idiosyncratic shock process and produces a business cycle independently of the other country.
ACB
ACB
ACB
ACB
Selection 1
This paper is organized as follows. <|MaskedSetence|> <|MaskedSetence|> Section 4 introduces Wiener-Hopf factorization. <|MaskedSetence|> Section 6 introduces regularized solutions. Section 7 provides an application of the theory. Section 8 concludes. Sections A-D comprise the Appendix. .
**A**: Section 5 sets up the LREM problem, its existence and uniqueness properties, and establishes its ill-posedness. **B**: Section 3 considers the solution of simple LREMs with elementary frequency domain methods. **C**: Section 2 sets up the notation and reviews the fundamental concepts of spectral analysis of time series.
CBA
CAB
CBA
CBA
Selection 1
<|MaskedSetence|> They replaced the so-called Boston mechanism that used the first-preference-first principle for each school, arguing that, due to this principle, “high-scoring kids were being rejected simply because of the order in which they listed their [schools] preferences” (Pathak and Sönmez, 2013). The same Boston mechanism has also been used for college admissions in several provinces in China, and it raised similar complaints. For example, one parent said: “My child has been among the best students in his school and school district. He achieved a score of 632 in the college entrance exam last year. Unfortunately, he was not accepted by his first choice. After his first choice rejected him, his second and third choices were already full. My child had no choice but to repeat his senior year” (Chen and Kesten, 2017; Nie, 2007). In 2003, more than 3 million students, representing half of the annual intake, were matched to significantly worse colleges than what their grades allowed (Wu and Zhong, 2020). These examples illustrate unfairness concerns with the old mechanisms: they can induce a matching with a so-called blocking student, that is, a student who missed a school while at least one seat at that school has been assigned to a student with a lower grade or priority or even left empty. The blocking student is entitled to this seat, yet she has not been assigned to it. <|MaskedSetence|> A matching with no blocking student is stable and is viewed as a fair outcome as it eliminates “justified envy”, a situation in which a student prefers a school that is assigned to another student with lower admission priority (Abdulkadiroğlu and Sönmez, 2003). In general, the relation between stability and fairness is more nuanced; see Romm et al. (2020). Gale and Shapley (1962) show that for any instance there is a student-optimal stable matching, a matching that every student finds at least as good as any other stable matching. <|MaskedSetence|> We refer to it as the Gale-Shapley mechanism.
**A**: It is important to note that we define the concept of fairness concerning true preferences and not reported preferences. **B**: In 2009, Chicago education authorities reformed their Selective High School admission system. **C**: This stable matching can be reached by the student-proposing deferred acceptance algorithm by Gale and Shapley (1962).
BAC
CAB
BAC
BAC
Selection 4
<|MaskedSetence|> <|MaskedSetence|> (2019) is a follow-up study of Angrist and Lavy (1999) that exploits the Maimonides’ rule in the Israeli school system: a school with 41 students must have two classes of 20 and 21. The treatment is the assignment to a smaller class, but schools may have manipulated the enrollment as “schools are warned not to move students between grades or to enroll those overseas so as to produce an additional class” (Angrist et al., 2019) by the Israeli Ministry of Education (MOE). Such manipulation is costly and should occur only to exceed the cutoff because “School leaders might care to do this because educators and parents prefer smaller classes. MOE rules that set school budgets as an increasing function of the number of classes also reward manipulation” (Angrist et al., 2019). <|MaskedSetence|> Otsu.
**A**: Hence, the density test is valid for their design. **B**: Many designs should be able to detect manipulation. **C**: For example, Angrist et al.
BCA
BCA
CAB
BCA
Selection 1
The MDep provides a fundamentally different and practically useful approach to dealing with (1) the unavailability of excluded instruments, (2) the weak-IV problem, and (3) the non-existence or contamination of the disturbance term by outliers. Consistent estimation is feasible without excludability provided endogenous covariates are non-linearly dependent (in the distributional sense) on exogenous covariates. The MDep is able to handle the weak IV problem by admitting instruments that may be uncorrelated or mean-independent but non-independent of endogenous covariates. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Future work could extend the estimator in several directions, such as incorporating clustering and accommodating temporal dependence in the observations..
**A**: Additionally, no moments for the disturbance term are required, and bounded one-to-one mappings of Z𝑍Zitalic_Z eliminate the need for moment bounds on the instruments. Consistency and asymptotic normality of the MDep estimator hold under mild conditions. **B**: This paper is a first step in developing a distance covariance-based estimator for econometric models under possible endogeneity. **C**: Illustrative examples backed by simulations showcase the gains of the MDep estimator vis-à-vis existing methods.
CAB
ACB
ACB
ACB
Selection 2
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This impossibility result provides theoretical characterization of constrained settings that need to be ruled out for any statistical rule to achieve uniformity. Second, I show the direct extension of the existing EWM statistical rule is not appealing in the setting with unknown cost. The reason is that this sample-analog rule ignores the estimation error in the estimated budget constraint, which has non-negligible consequences even when the sample size is large. For data distributions mentioned earlier where the budget constraint is exactly binding, the welfare loss does not vanish with sample size. The probability of selecting infeasible policies also does not vanish with sample size, but a simple modification to the sample-analog rule can ensure that feasible policies are selected with high probability. These considerations are relevant in real-world settings, as demonstrated by a calibrated simulation study. .
**A**: A notable example within this class is the expansion of welfare programs, such as tax credits to incentivize labor force participation, which can “pay for themselves,” as demonstrated in Hendren and Sprung-Keyser (2020), because they have zero net cost to the government. **B**: Firstly as a novel theoretical contribution, I quantify a class of reasonable data distributions that is particularly challenging for statistical rules. **C**: Specifically, no statistical rule can be uniformly welfare-efficient and feasible simultaneously over this class of data distributions.
BCA
BCA
BCA
CAB
Selection 1
<|MaskedSetence|> <|MaskedSetence|> In particular, they show that, once the previously mentioned variables are controlled for, there is no dependence between the distance to the nearest secondary school and whether the individual ever failed a grade in elementary school, how many times he repeated a grade in elementary school, and whether he had to work while attending elementary school. In addition, they show (using a different sample) that the distance variable is unrelated to test scores (Math, Bahasa, Science, and Social Studies) in elementary school. <|MaskedSetence|> For this reason, this exercise should be seen as illustrative. Estimation Results.
**A**: The instrumental variable (Z)𝑍(Z)( italic_Z ) for schooling is the distance (in kilometers) from the office of the community head to the nearest secondary school. **B**: The main assumption from Carneiro, Lokshin, and Umapathi (2017) is that if we consider two individuals with equally educated parents, with the same religion, living in a village which is located in an area that is equally rural, in the same province, and at the same distance of a health post, then distance to the nearest secondary school is uncorrelated with direct determinants of wages other than schooling. The authors present evidence that this assumption is likely to hold, suggesting that the IV is valid. **C**: However, the validity of the distance to the nearest secondary school instrument remains highly questionable.
CAB
ABC
ABC
ABC
Selection 2
<|MaskedSetence|> Crucially, this bound depends only on the decision problem and does not depend on the number of information sources or their specific details. Thus, the bound specifies a limit to the addition of relevant information sources — at some point, any new relevant information source introduced will necessarily make at least one existing source redundant. Aside from providing normative guidelines for constructing robust strategies, our findings offer an alternative explanation for the ignorance of information, which has distinct implications compared to existing explanations. For example, in models of rational inattention (see Maćkowiak, Matějka, and Wiederholt (2023) for a survey), more information is acquired and used when stakes are raised. <|MaskedSetence|> <|MaskedSetence|> (2020) found in a study that only 16.1% of patients sought a second opinion about their cancer treatment. .
**A**: With more than two states, we do not have a closed-form characterization of the optimal strategy, but we establish a bound on the number of information sources that are used. **B**: This distinction may help explain the ignorance of information even in high-stakes decision problems. For example, Olver et al. **C**: In contrast, in our model, multiplying the utility function by any constant does not alter the set of information sources attended to.
ACB
ACB
ACB
ACB
Selection 2
<|MaskedSetence|> The random attention rule is the probability distribution on all possible consideration sets. <|MaskedSetence|> <|MaskedSetence|> For preferences, we assume that the decision makers have complete and transitive (initially homogeneous, later heterogeneous) preferences over the alternatives, and that they pick the best alternative in their consideration sets. In this general setting where attention is random and limited, and products compete for attention, we aim to elicit compatible preference orderings and attention frequencies solely from observed choices. Existing random attention models cannot capture, or are incompatible with, attention overload. For example, Manzini and Mariotti (2014) consider a parametric attention model with independent consideration where each alternative has a constant attention frequency even when there are more alternatives, and therefore their model does not allow decision makers to be more attentive in smaller decision problems. Aguiar (2017) also shares the same feature of constant attention frequency. On the other hand, recent research has tried to incorporate menu-dependent attention frequency (Demirkan and Kimya 2020) under the framework of independent consideration. This model is so general that it allows for the opposite behavior to attention overload (i.e., being more attentive in larger choice sets). The recent models of Brady and Rehbeck (2016) and Cattaneo, Ma, Masatlioglu, and
**A**: Our baseline choice model has two components: a random attention rule and a homogeneous preference ordering, but we later enhance our model to allow for heterogeneous preferences. **B**: Attention overload then implies that the attention frequency should not increase as the choice set expands. **C**: To introduce our attention overload assumption formally, we define the amount of attention a product receives as the frequency it enters the consideration set, termed Attention Frequency.
ACB
ACB
ACB
ACB
Selection 2
<|MaskedSetence|> <|MaskedSetence|> When the data do not satisfy the exact stability conditions, it is interesting to evaluate the degree of violation as the data may not be exactly rationalizable but close to satisfying the exact conditions. In reality, household consumption behavior may not be exactly consistent with the model if, for example, the data contain measurement errors, there are frictions in the marriage market, or other factors, such as match quality, affect marital behavior. If the data are close to satisfying the exact conditions, we may want to include such data in the empirical analysis. The procedure we use can be applied to model such almost rationalizable behavior. To this end, we follow Cherchye et al. (2017) by evaluating the goodness-of-fit of a model by introducing stability indices. <|MaskedSetence|>
**A**: These stability indices allow us to quantify the degree to which the data is consistent with the rationalizable behavior. . **B**: The stability conditions in Definitions 4 and 5 can be checked by simple linear programming techniques (see Appendix B). **C**: This defines sharp tests for rationalizable consumption behavior: either the data satisfy the stability conditions or they do not.
BCA
BCA
BCA
BCA
Selection 3
<|MaskedSetence|> It requires that the distribution of the recommendations (e.g., the fraction of the population with a given level of credit score) be the same across sub-populations with different values of protected attributes. The requirement of statistical parity is close in spirit to the concept of disparate impact from US law, which forbids substantial statistical differences in outcomes for groups with different values of protected attributes. <|MaskedSetence|> Thus, our results on optimal privacy-preserving recommendation provide a characterization of optimal recommendation systems satisfying statistical parity. Moreover, our framework allows one to quantify the efficiency loss due to the fairness constraint. A popular notion of differential privacy from computer science formalizes privacy in another context; see Dwork and Roth (2014) for a survey. Each member of a population already possesses private information, and a social planner aims to compute an objective depending on this information. Differential privacy ensures that this computation does not expose individual private information too much. <|MaskedSetence|>
**A**: So, differential privacy is an approximate “vertical” privacy notion. **B**: In other words, the recommendation and the collection of protected attributes have to be independent of each other, forming a private private information structure. **C**: Our privacy requirement is equivalent to the fairness constraint of statistical parity, one of the most popular fairness principles (Kleinberg et al., 2016, Aswani and Olfat, 2022, Dwork et al., 2012).
CBA
CBA
CBA
CBA
Selection 2
<|MaskedSetence|> In principle, it is possible to shift and turn the regression hyperplane at every point in time using appropriate indicator variables. Finding the true structural breaks corresponds to selecting relevant indicators and eliminating irrelevant indicators thereby optimizing the fit under sparsity. This leads to a high-dimensional penalized regression model with the total number of parameters of the model close to the number of observations. <|MaskedSetence|> Their objective function includes a penalty for nonzero parameters and a tuning parameter controls the sparsity of the selected model. However, quite restrictive regularity conditions about the design matrix (restricted eigenvalue condition (Bickel et al., 2009) or strong irrepresentable condition (Zhao and Yu, 2006)) need to be imposed to ensure simultaneous variable selection and parameter estimation consistency. <|MaskedSetence|> Consequently, the conventional LASSO-type estimators need to be improved to both estimate and select the true model consistently. Harchaoui and Lévy-Leduc (2010) are among the first to use penalized regression methods to detect structural breaks. They focus on a piecewise constant white noise process and detect structural breaks using a total variation penalty. Bleakley and Vert (2011) use the group fused LASSO for detection of piecewise constant signals and Chan et al. (2014) develop the aforementioned two-step method. Ciuperca (2014), Jin et al. (2016) and Qian and Su (2016) consider LASSO-type estimators for the detection of multiple structural breaks in linear regressions. Behrendt and Schweikert (2021) propose an alternative strategy to eliminate superfluous breakpoints identified by the group LASSO estimator. They suggest a second step adaptive group LASSO which performs comparably to the backward elimination algorithm suggested in Chan et al. (2014). Schmidt and Schweikert (2022) consider cointegration tests in the presence of structural breaks in the long-run relationship and estimate those breaks with an adaptive LASSO estimator. Schweikert (2022) uses the adaptive group LASSO estimator to estimate structural breaks in single-equation cointegrating regressions. The estimator developed in this paper can be understood as an extension of the Schweikert (2022) approach to multiple equation cointegrated systems, so that structural change of more than one equilibrium relationship can be modelled..
**A**: LASSO-type estimators, introduced by Tibshirani (1996), have attractive properties in high-dimensional settings with a sparse model structure. **B**: The idea to perceive the change-point problem in linear regressions as a model selection problem has spawned a diverse literature (see, for example, Harchaoui and Lévy-Leduc, 2010; Bleakley and Vert, 2011; Chan et al., 2014; Safikhani and Shojaie, 2022; Schweikert, 2022). **C**: Unfortunately, these conditions are usually violated in change-point settings, where adjacent columns of the design matrix differ only by one entry and the design matrix is highly collinear if the sample size grows large.
BAC
BAC
CAB
BAC
Selection 4
<|MaskedSetence|> <|MaskedSetence|> It refers to the practice that a large donor (e.g., an employer or a charitable foundation) matches individuals’ contributions to a specific cause to increase the gift. Eckel and Grossman showed in both within-subject and between-subject lab experiments that a donation match is significantly more effective than a rebate (returning a portion of the donation to the giver) in fundraising performance (Eckel and Grossman, 2006, 2003). Gandullia and Lezzi (2018) performed online experiments to show similar findings that donation matching is more effective than rebates. <|MaskedSetence|> The studies that compared donation match with rebates controlled for the value of the money added to the gift or returned to the donor. In our context, the value of crypto rewards is ambiguous and key to the relative advantage of these two fundraising strategies..
**A**: Kamas and Preston (2010) showed that donation matching is effective even for self-interested donors (as compared to social surplus maximizers and inequity averters). **B**: Donation matching has been identified as one of the most effective means of fundraising. **C**: 2.2 Donation Matching The next hypothesis is designed to enhance the understanding of crypto rewards by drawing a comparison with the fundraising strategy of donation matching.
CBA
CBA
CBA
ACB
Selection 1
When is there a trade-off between efficiency and envyfreeness or, equivalently, when is there a trade-off between preferences and priorities? Our results confirm the conjecture put forward by Pathak (2017) according to which “correlation between preferences and priorities induced by proximity may, in turn, result in less scope for Pareto-improving trades across priority groups that involve situations of justified envy. <|MaskedSetence|> Specifically, our generalized mutually best pairs condition maximally captures the set of environments where priorities and preferences are sufficiently congruent that DA is efficient, and there is no trade-off between efficiency and envyfreeness. <|MaskedSetence|> A small trade-off was found in (primarily) elementary school markets with priorities to siblings, staff, and some measure of distance (Boston, Ghent, New Orleans). The elementary school level is exactly the education level where one would also expect parents to place greater emphasis on proximity or selecting the same school as the older sibling in order to facilitate home logistics when children are not yet capable of travelling by themselves. Most empirical studies of parental preferences restrict attention to one educational level and so there is little evidence for this comparative statement. <|MaskedSetence|>
**A**: This pattern may then result in a small degree of inefficiency in DA”, and clarify to which extent the conjecture holds. **B**: Our condition is more general than Pathak (2017)’s correlation conjecture between preferences and priorities because it restricts attention to those parts of preferences and priorities that are relevant for an envyfree allocation. Our results shed light on the empirical evidence presented in Table 1. **C**: Harris and.
ABC
ABC
ACB
ABC
Selection 1
<|MaskedSetence|> Arrow (1971, 1998) argues that discrimination may arise as the result of coordination failure. One demographic collective, call it Group 1, expects to be discriminated against, and therefore does not undertake the costly investments that are needed to succeed in the labor market. <|MaskedSetence|> <|MaskedSetence|> Such a discriminatory equilibrium is, typically, Pareto dominated by an impartial equilibrium where employers hold uniformly positive beliefs about all groups, and the latter all invest. The second canonical model follows Phelps (1972) (see also.
**A**: Employers, in turn, rationally favor Group 2 over Group 1 because the former is expected to invest and the latter is not. **B**: Economists have proposed two canonical models of statistical discrimination: the Arrovian model of coordination failure, and the Phelpsian model of information heterogeneity. **C**: Group 2 expects to be favored and therefore invests.
ACB
BCA
BCA
BCA
Selection 4
Remark 3.1. While strong sign preservation may be perceived as a key requirement for parameters that intend to identify partial causal effects, it may not be a reasonable requirement in settings where the counter-factual question of interest involves total effects, as introduced and discussed in the next section. The distinctions between “partial” and “total” causal effects have appeared in the literature in a variety of contexts, even beyond the mediation analysis literature discussed in Remark 2.1, where Pearl (2001) and Robins (2003) provide comprehensive treatments on these distinctions. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>   .
**A**: Manski (1997, page 1321 and 1323), in turn, provides two interpretations of potential outcomes (one that keeps other actions fixed and another one that lets the other actions change in response to the main action) and clarifies that the interpretation of treatment effects depends on how we think about potential outcomes. **B**: For example, Heckman (2000) defines a causal effect as a partial derivative and states that while the assumption that an isolated action can be varied independently of others is strong but “…essential to the definition of a causal parameter”. **C**: Here, we do not dwell on discussions about the relative merits of partial or total effects but rather seek to understand whether commonly used estimands in empirical work admit either of these (commonly sought after) interpretations under different assumptions.
CBA
BAC
BAC
BAC
Selection 4
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Later it has been widely suggested that the setup suffers from a weak instrument problem (Angrist and Krueger 1995; Bound et al. 1995). MS2022 applied their proposed pre-test and argued the instrument set is strong with the original full data. .
**A**: At the time of publication, the issue of weak instruments had received little attention. **B**: 5 An empirical illustration: Return to education In this section, we re-analyse the returns to education data of Angrist and Krueger (1991) (henceforth referred to as AK1991) using quarter of birth as an instrument for educational attainment, and construct confidence intervals for the strength of instruments. **C**: One of the specifications in the original AK1991 uses up to 180 instruments that include 30 quarter and year of birth interactions and 150 quarter and state of birth interactions.
BCA
BCA
BCA
BCA
Selection 1
The analysis spans fiscal years 2005 to 2020. <|MaskedSetence|> In contrast, the COVID-19 crisis has hit counter-cyclical sectors (e.g., in-person services) sharply (Piłatowska and Witkowska, 2022). We focus on working-age (16-64) employees, i.e., people who are in employment and paid a wage by an employer for their work. We excluded self-employed workers (i.e., people who, in their main employment, work on their own account whether or not they have employees) from the analysis because the earnings questions are not addressed to respondents who are self-employed in the UK LFS. <|MaskedSetence|> <|MaskedSetence|> Our final sample consists of 334,055 female workers and 307,245 male workers. The dataset includes variables on a wide range of (i) demographic characteristics (gender, age, country of birth, nationality, ethnicity, religion); (ii) socio-economic factors (presence of dependent children, marital status, education, experience, full/part-time job, remote work, public sector, training opportunities, sectors and occupations); (iii) geographical information on residence and working region.
**A**: Dustmann et al., 2010; Dustmann and Frattini, 2014). **B**: This period covers widespread enforcement of equality legislation, the 2007-2008 financial and economic crisis, and the recent changes caused by the COVID-19 outbreak. Most of the literature shows that the 2007-2008 crisis had a severe impact on male-dominated sectors, such as construction and manufacturing (Hoynes et al., 2012; Périvier, 2014; Doepke and Tertilt, 2016). **C**: This practice is standard in this type of analysis (see, e.g.
BCA
BCA
BCA
BCA
Selection 3
We make use of revision operators to represent knowledge revision. <|MaskedSetence|> <|MaskedSetence|> Before formally introducing revision types in Subsection 4.2, we identify all feasible outcomes of knowledge revision in Subsection 4.1. <|MaskedSetence|> In Subsection 4.4 we compare revision types by defining a partial order over them. .
**A**: Different agents may use different revision operators depending on how introspective they are. **B**: In Subsection 4.3, we provide a full characterization of knowledge revision in terms of possibility relations. **C**: An agent’s revision type tells us which operators the agent uses as well as the order in which she employs them.
ACB
CAB
ACB
ACB
Selection 4
<|MaskedSetence|> The seminal work of Berk (1966) provides conditions under which the limiting posterior beliefs concentrate on the model that minimizes the Kullback-Leibler divergence from the truth. Economic models of misspecified Bayesian learning can be broadly categorized into passive and active learning models. The former, including ours, assume that the actions taken by the DM do not affect the signals generated by the underlying truth. Our contributions to passive misspecified learning are twofold. First, we propose a model of making predictions using familiar and unfamiliar information sources applicable to broad phenomena. Second, we obtain general characterizations of the long-run outcomes using fundamental principles from statistical decision making. The most related work to ours is Heidhues et al. (2019) (hereinafter, HKS), which examines a passive learning model with a single source of misspecification and demonstrates that Berk’s convergence result applies in multidimensional Gaussian environments. <|MaskedSetence|> The main difference between HKS and the current work lies in the DM’s decision problem. HKS examine how the DM’s overconfidence in his caliber distorts his estimates about ingroups and outgroups — a problem akin to what our DM faces in the first stage. <|MaskedSetence|> This second stage, absent from HKS, generates new insights such as DOOM that have eluded previous researchers..
**A**: However, our DM uses multiple misspecified sources to guide his learning of unfamiliar sources and, crucially, uses the learned model to predict a random state. **B**: The study of misspecified Bayesian learning models has a long tradition in statistical decision theory. **C**: Our convergence proof builds on their analysis.
BCA
BCA
ACB
BCA
Selection 4
<|MaskedSetence|> We showed that optimal algorithm recommendations must strike a balance between increasing the trade surplus and inducing low prices. The optimal algorithmic consumption drastically changes the predictions of third-degree price discrimination, whereby finer market segmentations by the sellers do not affect the total consumer surplus or seller profits but result in larger price spreads and more equitable surplus distribution. We view our work as a stepping stone toward a better understanding of algorithmic design in strategic settings with incomplete information. First, our model of algorithms is deliberately stylized to analyze strategic motives in a clear and tractable way. <|MaskedSetence|> Second, it would be interesting to study market structures for algorithm providers and understand which of the algorithms that we characterize are favored by one or another market structure. Third, the developed ideas of algorithmic decisions can be exported beyond consumption settings, such as to algorithmic matching or algorithmic negotiations. <|MaskedSetence|>
**A**: A practical implementation would ideally incorporate many engineering concerns from which we abstracted away, such as value estimation details, computational complexity, and robustness. **B**: All this further research can be built upon the analytical framework proposed in this paper.. **C**: In this paper, we studied the question of optimal algorithmic consumption in the presence of strategic pricing.
ABC
CAB
CAB
CAB
Selection 4
Let us now proceed to investigate non-linearity. <|MaskedSetence|> 4), but it is hard to quantify and would be somewhat meaningless for a large variation in the feature of interest. <|MaskedSetence|> <|MaskedSetence|> The average non-linearity measure (ANLM) is 0.36, indicating that the linear secant, on average, is a bad descriptor of the FME. .
**A**: Non-linearity can be visually assessed for ICE curves (see Fig. **B**: Furthermore, for bivariate or higher-dimensional changes in feature values, we lose any option for visual diagnoses of non-linearity. **C**: In contrast, the NLM can be computed for FMEs with continuous step sizes, regardless of dimensionality.
ABC
ABC
ABC
ABC
Selection 4
<|MaskedSetence|> As histories only include messages sent by the agent, we implicitly rule out the use of randomization (as in, for instance, Deb et al. (2018)). Deterministic mechanisms have significant practical appeal, as it is not always obvious what a randomization might correspond to and how a mechanism designer might implement this. <|MaskedSetence|> Now, randomization provides no additional benefit in implementing maximum effort with probability 1. But it is natural to ask whether some designer objectives may yield benefits to randomization. <|MaskedSetence|>
**A**: 6.1 Randomized Contracts Our goal has been to determine the maximum effort implementable within the class of contracts defined in Section 2.2. **B**: Our analysis speaks to this question as well. . **C**: These issues have been discussed extensively in the contracting literature; we refer the reader to discussions in Laffont and Martimort (2002) as well as Bester and Strausz (2001) on this point to avoid detours.
ACB
ACB
ACB
BAC
Selection 1
<|MaskedSetence|> They focus on the existence of stable partitions but they do not analyze their exact structure. To make more precise predictions about the possible size and composition of stable partitions, we restrict attention to what we call θ𝜃\thetaitalic_θ-minimal proportional generalized claims problems where coalitions of size smaller than θ𝜃\thetaitalic_θ receive zero endowments and all remaining coalitional endowments are a fixed proportion of the sum of the claims of coalition members. Note that the concept of proportionality implicitly assumes that claims are objectively determined or verifiable. If this was not the case, given the proportionality assumption for coalitional endowments, agents would have strong incentives to overstate their claims in order to receive larger coalitional endowments and payoffs. <|MaskedSetence|> <|MaskedSetence|> The exception for fixed coalitional endowments is the CEA rule when agents’ preferences are single-peaked (in this context, the CEA rule is also referred to as the uniform rule, see Appendix G). However, when coalitional endowments are proportional to coalitional claims / peaks, overstating claims / peaks can be a strictly dominant strategy for the CEA rule as well. are not satisfied by our algorithms..
**A**: Note that even if coalitional endowments are fixed, most resource monotonic rules are not strategy-proof. **B**: Therefore, strategic properties, e.g., strategy-proofness (strategy-proofness requires that, for each agent, truthful claim reporting is a weakly dominant strategy in the associated direct revelation game). **C**: 6 Conclusion In this paper, we continue the analysis of Gallo and Inarra’s (2018) coalition formation problems with claims.
ACB
CBA
CBA
CBA
Selection 3
<|MaskedSetence|> The finer details in the background of this survey design are summarized in Online Appendix B.1. The average THBS head’s age is 33, which is indeed in the skilled workers’ representative age range in the machinery factories (Kitazawa 1924). Specific occupations are not available for most heads because they are usually listed as “factory workers” in the occupation section. <|MaskedSetence|> Online Appendix A.6 summarizes the work in the machinery factories in Tsukishima. Column 1 of Panel C in Table 2 indicates that the average monthly earning of the THBS heads is 59 yen (median = 56 yen). In Column 2, I calculated the average monthly earning of male machinery factory workers in the skilled-age range using the manufacturing censuses, which shows similar monthly earnings (56 yen). In brief, this figure is calculated using the average daily wage of male factory workers aged 30–40 and average annual working days measured in the manufacturing censuses. <|MaskedSetence|> The average wages for the male factory workers had an inverted-U-shaped distribution with respect to age, taking the greatest figure in the 30s (Online Appendix Figure B.7). This is a similar life-cycle pattern to the factory workers observed in late 19th century England (Horrell and Oxley 2000, p. 42).
**A**: Approximately six of those skilled workers were employed in the middle- to large-scale factories, and three worked in small-scale factories. Online Appendix A.5 summarizes this point in detail using the complete survey of the primary school students in the 4th to 6th grades in Tsukishima. Given this, the THBS was designed to investigate the budgets in skilled workers’ households. In brief, Iwasaburo Takano, a survey director, considered that the skilled workers in the modern industrial sector constituted the core of the labor force, labor movement, and citizenry. **B**: Online Appendix B.6 provides the finer details of the calculation steps. **C**: However, sub-categories available for some heads indicate that they were the typical skilled workers in machinery factories such as lathe operators and finishers.
ACB
BCA
ACB
ACB
Selection 4
The ‘do’ operator is forcing B = b by intervention. <|MaskedSetence|> [1] is provided in Figure 1(a). Eqn. <|MaskedSetence|> <|MaskedSetence|> Moreover, Popper (1957) in his writings on social sciences refers to ‘self-limiting’ predictions influencing the predicted event in a preventative sense. Markets are similarly understood to adapt as the outputs of scrutiny are assimilated. This understanding is accommodated within the idealized neoclassical theory of economics. .
**A**: [1] and its associated framework accommodates both neoclassical equilibria and heterodox dynamics, depending on the size and term of the arbitrageable phenomenon under consideration. **B**: A diagrammatic representation of Eqn. **C**: Mathematician Robert Buck defined ‘self-frustrating’ predictions as forecasts that are initially true, but become false on public dissemination (Buck, 1963).
BAC
BAC
BAC
CBA
Selection 3
The present article aims to fill a gap in the existing literature, which does not, in general, study settings with competition on the algorithms. A notable exception is Sanchez-Cartas and Katsamakas (2021), who compare Q-learning with Particle Swarm Optimization (PSO). <|MaskedSetence|> Given a function to minimize (without knowing its gradient), PSO simulates the evolution of candidate solutions (seen as particles in a swarm). The particles move in the solution space depending on their own best known solution and the best known solution of the swarm, and oftentimes manage to collectively find optima. Their simulations indicate that, when PSO is competing with a stateless Q-learning algorithm, both set supracompetitive prices; however, they do not consider the different technologies as being chosen in order to compete with each other. To the best of our knowledge, only Compte (2023) [12] integrates equilibrium considerations. In this paper, a variation of Q-learning integrating a possible bias towards cooperation is considered. Biases are chosen simultaneously by players before the algorithms start running. <|MaskedSetence|> Finally, recent work by Dolgopolov (2024) [15] and Xu and Zhao (2024) [34] has provided further insight into the behavior of interacting Q-learning algorithms. Dolgopolov (2024) provides a full characterization of stochastically stable states of Q-learning algorithms repeatedly playing a prisoner’s dilemma. He proves that under ε-greedy policies, the only stochastically stable outcome is for both algorithms to play defection, while depending on the payoffs and the learning rate, logit exploration allows for some cooperative behavior. <|MaskedSetence|> They then give insight on why there is a difference between what stochastic stability allows and the empirical behavior of Q-learning algorithms using numerical simulations. 3 Setting and notations
**A**: In a similar way, Xu and Zhao (2024) prove that, for a wider class of games including prisoner’s dilemma, Bertrand competition as well as first and second price auctions, in stochastically stable outcomes algorithms learn to play the strict Nash equilibrium of the game. **B**: PSO (Kennedy and Eberhart 1995 [21]) is a meta-heuristic method of optimization belonging to the class of evolutionary algorithms. **C**: Simulations using a prisoners’ dilemma indicate that Nash equilibria featuring positive bias towards cooperation exist and enhance collusive behavior.
BCA
ABC
BCA
BCA
Selection 1
The remainder of the paper is structured as follows. Section 2 provides an overview of the literature and places our novel contribution within it. <|MaskedSetence|> Section 4 introduces the two approaches we take to handle the fixed-effects problem. Section 5 formally introduces the DML estimation procedures. <|MaskedSetence|> <|MaskedSetence|>
**A**: Finally, Section 7 illustrates an empirical application of the procedure and we make concluding remarks in Section 8. . **B**: Section 3 motivates the partially linear panel regression model and the causal assumptions which must hold to ensure the target parameter can be interpreted as a causal effect. **C**: Section 6 briefly discusses the Monte Carlo simulation results.
BCA
BCA
ABC
BCA
Selection 1
<|MaskedSetence|> There are three main steps to our approach: first, we obtain an estimate of the relative demand of travelers traveling between different zip-codes in the Bay Area by using the Safegraph dataset. <|MaskedSetence|> After accounting for the sampling bias induced due to the randomly sampled population across the United States, we calibrate demands by using the ACS dataset which provides the income-stratified driving population in every zip code. <|MaskedSetence|> The details of demand estimation are included in (Maheshwari et al., 2024). Calibrating the edge latency functions.
**A**: Particularly, for every month, the Neighborhood Patterns data in the Safegraph dataset provides the average daily count of mobile devices that travel between different census block groups (CBGs) during the work day, which is then aggregated to obtain the relative demand of travelers traveling between different zip codes. **B**: Next, we describe the approach used to compute the daily demand of different types of travelers traveling between different o-d pairs during January 2019-June 2019. **C**: Finally, to obtain an estimate of daily variability in demand we further augment the demand data with the PeMS dataset by adjusting for daily variation in the total flow on the network in every month.
BCA
BAC
BAC
BAC
Selection 3
We again compare the performance of the optimal SS-SSD portfolios with that of the MAXSER portfolios of Ao, Li, and Zheng (2019). The upper panel of Figure 2 plots the number of stocks of the optimal SS-SSD portfolios through time that eliminate the diversification loss, as well as the number of stocks of the efficient MAXSER portfolio. The lower panel plots the estimated expected loss of the optimal MAXSER corresponding to the inefficient SSD portfolios with the same number of stocks as MAXSER. <|MaskedSetence|> On a rolling-window basis, the number of assets in the SS-SSD portfolios is always higher than in the MAXSER portfolios. <|MaskedSetence|> <|MaskedSetence|>
**A**: It shrinks to around 25 assets in the crisis periods of 2008-2009 and at the beginning of the Covid-19 period. **B**: The diversification loss is zero for the efficient SS-SSD portfolios corresponding to the upper panel by construction. **C**: Otherwise the number of assets in the SS-SSD portfolios is stable between 30 and 35. The number of assets in the MAXSER portfolios is more volatile. .
BAC
CBA
BAC
BAC
Selection 1
<|MaskedSetence|> <|MaskedSetence|> These three random variables play an important role within the risk management strategy of an insurance company. For instance, risk measures such as the Value-at-Risk and the Tail-Value-at-Risk have a close link with the deficit at ruin, while from a monitoring perspective, the surplus prior to ruin could be thought of as an early warning signal for the insurance company. <|MaskedSetence|> Extensive literature on these variables exists for the Cramér-Lundberg model and its variations (see, for example, Gerber and Shiu (1997), Gerber and.
**A**: Apart from facilitating the study of the infinite-time trapping probability, classical risk theory provides additional tools that allow the examination of other quantities of interest, such as a household’s income short-fall at the trapping time (the time at which a household’s capital falls into the area of poverty), thus granting a much deeper understanding of a household’s transition into poverty. **B**: The (ruin) time at which any such event takes place is then of critical importance (Landriault and Willmot, 2009). **C**: In particular, the Gerber-Shiu expected discounted penalty function, which was originally introduced by Gerber and Shiu (1998), gives information about three quantities: the time of ruin, the deficit at ruin, and the surplus prior to ruin, corresponding to the first time an insurer’s surplus becomes negative, the undershoot and the overshoot of the insurer’s surplus at ruin, respectively.
BAC
ACB
ACB
ACB
Selection 3
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> [17] consider multi-stage stochastic programming for an uncapacitated remanufacturing environment. Among other parameters, customer demand is assumed to be uncertain. Scenario trees are used to represent this uncertainty and demands are sampled from a uniform distribution. The authors develop a branch and cut algorithm and apply the proposed multi-stage stochastic model in a rolling horizon manner. Capacity restrictions are not considered, even though they constitute a major limitation in real-world environments. For this reason we investigate a capacitated setting and analyze the effect of different resource utilizations..
**A**: The integrated lot sizing and cutting stock problem under demand uncertainty is studied in [16]. **B**: However, including short term decisions within the medium term planning model leads to a strong increase in model size and makes it more difficult to obtain a solution quickly. Quezada et al. **C**: The authors apply a robust and two-stage stochastic optimization approach in a rolling horizon manner, in order to adapt the models to the multi-stage stochastic setting.
CAB
ACB
ACB
ACB
Selection 3
Under the widely adopted NEM tariff design, when the community is net-importing, it faces a buy (retail) rate; whereas when it is net-exporting, it faces a sell (export) rate [10, 11]. Energy communities also appeal to nodes without BTM DERs, which can own quotas of a shared central PV or take advantage of other members’ generation capabilities. This paper proposes a novel two-part pricing (a two-part pricing comprises a volumetric charge, which depends on the prosumer’s net consumption, and a non-volumetric charge, a lump sum that does not vary with the prosumer’s usage [12]). <|MaskedSetence|> <|MaskedSetence|> Second, the mechanism is OEs-aware, as it induces a collective response that does not violate the OEs on the community’s aggregate net consumption. <|MaskedSetence|> Fourth, the market mechanism obeys the cost-causation principle which, in part, ensures that joining the community is advantageous over autonomously facing the DSO.
**A**: mechanism for energy communities. **B**: First, the pricing mechanism is threshold-based with the energy price being a function of the community’s aggregate supply. **C**: Third, the mechanism aligns each member’s incentive with that of the community such that each member maximizing its individual surplus results in maximum community welfare.
ABC
ABC
BCA
ABC
Selection 4
<|MaskedSetence|> As seen from Table 4, WIAE-GPF obtained the best sMAPE, CRPS, and CPE(90%), and was a close second for the rest. <|MaskedSetence|> DeepVAR had the best NMAE, MASE, and CPE(50%), demonstrating its capability of accurately estimating the conditional distribution around the median. Fig. 6 (top row) shows the trajectory of winter LMP predictions. DeepVAR exhibited larger variability when predicting the LMP of January 2024, which explains its higher level of NMSE. Pyraformer and WIAE-GPF achieved a similar level of variance when predicting winter LMP. <|MaskedSetence|>
**A**: January 2024 has the most volatile LMP among the four months tested. **B**: Pyraformer, with its MSE-minimizing training objective, achieved the best NMSE at around 7%. **C**: TLAE had the worst ground truth-tracking performance among the four trajectories. .
ABC
ABC
ABC
ACB
Selection 2
The latter paper proves identification results for friendship formation and peer effect measurement, which this paper does not cover. However, to obtain the theoretical results, Griffith (2024) imposes much more restrictive parametric assumptions for friendship formation than PeerNN. <|MaskedSetence|> In addition, our policy optimization strategy is more systematic. <|MaskedSetence|> The biggest difference between our paper and Griffith (2024) is that we aim to optimize group assignment policy and the network is policy-dependent; in contrast, the latter paper considers treatment assignment (same as papers cited in Section 2.6) and the network is fixed. Other attempts to tackle classroom assignment problems or similar group assignment problems include Bhattacharya (2009), Graham et al. <|MaskedSetence|> (2019), Bizzotto and Vigier (2022) and Sarkisian and Yamashita (2024); we defer the discussion of the difference between these works and ours to Section 2.8 when we compare their choices of optimization method to ours.
**A**: Another advantage of PeerNN is that it does not require linkage data like the latter paper, making our paper more broadly applicable and less ethically concerning. **B**: (2014), Berthelon et al. **C**: Griffith (2024) simulates a few counterfactual scenarios to compare a small number of assignment policies; our paper adapts GA to search for the optimal policy and subsequently considers algorithmic fairness, which will be discussed in Section 2.7.
CBA
ACB
ACB
ACB
Selection 4
In this paper, we develop a new estimator for causal change attribution measures, which combines regression and re-weighting methods in a multiply-robust estimating equation. We provide the first results on consistency and asymptotic normality of ML-based estimators in this setting, and discuss how to perform inference. Moreover, our method can be used to estimate Shapley values, which will inherit the consistency and large-sample distribution properties. Finally, we suggest one direction for future research. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: The sensitivity bounds in Chernozhukov et al. **B**: (2022) could be adapted as a way to test the robustness of a causal change attribution study to unobserved confounding. . **C**: The causal interpretation of the change attribution parameters relies on an assumption of no unobserved confounding.
CAB
CAB
CAB
ACB
Selection 1
In a particular case of the general Bayesian MS–VAR process, we also get closed–form density functions of the random components of the model. In particular, we find that the joint distribution of future values of the random coefficient matrix is a product of matrix-variate Student distributions; see equation (2). <|MaskedSetence|> <|MaskedSetence|> Thus, future studies may concentrate on marginal density functions and direct simulation methods for the density function. <|MaskedSetence|>
**A**: Further, we obtain a characteristic function of the random coefficient matrix, which can be used to calculate the forecast of the endogenous variables. . **B**: Hence, one can analyze impulse response by directly generating the coefficient matrices from the distribution functions. **C**: Also, we provide a new density function of future values of the endogenous variables, which has not been introduced before; see equations (33) and (43).
BCA
BCA
ACB
BCA
Selection 2
We will start with an analysis where the group attribute in our model can be interpreted as gender. <|MaskedSetence|> <|MaskedSetence|> The left panels show the share of women in the network, as well as the homophily in hiring as well as the resulting homophily in the network. The middle panels show the over/under-representation of women in firms with a certain centrality. The right panels show the perception of the group of women for female/male/all board members. <|MaskedSetence|>
**A**: The first scenario assumes that the initial assignment of seats is unbiased and that the homophily in hiring decreases with group size equalization. **B**: The results are shown as panels A in figure 4. Figure 4: Simulation results for the case of gender and three different scenarios. **C**: A: baseline scenario, B: biased initial seat distribution, C: fixed homophily in hiring..
ABC
BAC
ABC
ABC
Selection 1
Non-copyrighted data We further explored the SRS framework’s response to prompts requesting content generation from non-copyrighted data sources, as shown in Figure 7. In these scenarios, the SRS distribution was observed to be nearly uniform across all copyright owners. This outcome aligns with expectations, as the generated content lacks direct ties to any of the copyrighted data sources. <|MaskedSetence|> <|MaskedSetence|> Notably, all domains exhibit very low or negative Shapley values, with the exception of Wikipedia. This exception can be attributed to Wikipedia’s substantial repository of math-related content, which likely contributes positively to the generation of mathematical text. <|MaskedSetence|>
**A**: Figure 5 presents the Shapley values for generating a corpus in DM Mathematics. **B**: This result underscores the SRS framework’s sensitivity to domain-specific relevance, even within broader, general-purpose datasets.. **C**: This uniformity demonstrates the SRS framework’s ability to avoid disproportionate revenue distribution.
CAB
CAB
CAB
CBA
Selection 1
Similar to the previous experiment, this one was fielded to 1,200 US respondents by Lucid in November 2023, with five pairs to evaluate this time. Table 3 displays the AMCE for gender in both the original and modified experiments. <|MaskedSetence|> <|MaskedSetence|> A key observation is the absence of a clear effect reversal in the reduced-salience setting. We note that this result does not suggest that researchers should assign adjacent levels for all conjoint experiments. The purpose of this design choice is to test the hypotheses derived from the theory. <|MaskedSetence|>
**A**: The broader implication on research design is discussed in the section 4. . **B**: The third column illustrates the effects when we controlled for salience as described. **C**: When attribute levels were assigned randomly—without controlling for salience effects—the corresponding AMCEs, as shown in the second column, experienced a sign reversal after the number of attributes increased from six to eight.
CBA
CBA
CBA
CBA
Selection 4
Hu (Hu, 2004) initially compared the CPM and CPC using the optimal contract design framework, while Asdemir et al. (Asdemir et al., 2012) analyzed the same models under the principal-agent framework. Both studies argued that the performance-based CPC model can incentivize publishers and advertisers to enhance advertising performance. Agarwal et al. (Agarwal et al., 2009) differentiated CPA from CPC. Hu et al. <|MaskedSetence|> <|MaskedSetence|> (Zhu and Wilbur, 2011) and Liu et al. (Liu and Viswanathan, 2014) investigated hybrid auctions that allow advertisers to select their preferred pricing models. However, existing literature on pricing models exhibits several gaps. Firstly, to the best of our knowledge, no existing study has examined OCPC (or OCPM) as a pricing model. <|MaskedSetence|>
**A**: Although some papers (Zhu et al., 2017; Jain and Khan, 2021; Tang et al., 2020) have explored OCPC’s algorithm system architecture, it is still unclear under which conditions OCPC should be employed in comparison to other pricing models.. **B**: Zhu et al. **C**: (Hu et al., 2016) delved deeper into the trade-offs between CPA and CPC, focusing on incentive issues.
CBA
ABC
CBA
CBA
Selection 1
<|MaskedSetence|> The normalized version of HHI is even more widespread (Owen et al., 2007). <|MaskedSetence|> It solves the problem of limited cardinality of most indices and makes the interpretation of the differences between the levels of competitive balance possible. Avila-Cano et al. (2021) determine its maximum concentration as a function of the number of teams and the scoring system. <|MaskedSetence|>
**A**: Triguero Ruiz and Avila-Cano (2019) construct a novel index based on the concept of distance, with a range in the unit interval, which is the square root of the normalized HHI. **B**: Analogously, Triguero-Ruiz et al. (2023) compute the minimal HHI in sports leagues without ties. Naturally, some alternative measures of long-term competitive balance have also been suggested. Manasis et al. (2013) introduce a new measure to capture the degree of competition for winning the important prizes awarded in the European football leagues such as the championship, the qualification for lucrative international club competitions, and avoiding relegation. **C**: The other popular concept to measure long-term competitive balance is the Herfindahl–Hirschman index from the literature of industrial organization (Herfindahl, 1950; Hirschman, 1945, 1964), firstly used in sport by Depken (1999).
CAB
CAB
CBA
CAB
Selection 4
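For reference, a minimal sketch of the concentration measures discussed in the row above (standard definitions, not taken verbatim from the cited papers): with N teams holding win or point shares s_1, …, s_N that sum to one, the Herfindahl–Hirschman index and its normalized version are

```latex
% Herfindahl–Hirschman index over N teams with shares s_1,\dots,s_N summing to one
\[
\mathrm{HHI} = \sum_{i=1}^{N} s_i^{2}, \qquad \frac{1}{N} \le \mathrm{HHI} \le 1,
\]
% normalized HHI, rescaled to the unit interval
\[
\mathrm{HHI}^{*} = \frac{\mathrm{HHI} - 1/N}{1 - 1/N} \in [0,1].
\]
```

The distance-based index of Triguero Ruiz and Avila-Cano (2019) mentioned in the row is then simply the square root of the normalized HHI.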
There has been plenty of literature dealing with an observable status quo affecting the choices of a DM. <|MaskedSetence|> These authors were the first to deal with observable reference alternatives’ impact on attention in a stochastic choice setting. It is natural to presume that a reference alternative can direct attention when the underlying heuristic is applied due to limited attention, for which they provide a characterization of a reference-dependent RAM, as well as special cases where the attention function is further restricted. In certain cases, it may make sense that the reference is observable. For instance, for a decision maker in the market for a new car, his current car can be a reference. However, in many situations, it is not quite realistic to assume that the DM’s reference is known to anyone other than the DM himself. Secondly, to analyse a choice problem with observable references, we have to deal with a family of stochastic choice functions, one for each alternative as the reference, which is quite a large data requirement.
**A**: In particular, Kovach and Suleymanov (2023) uses the framework of RAM to look at how a reference alternative, or a status quo in their case, can affect choices if it directs attention. **B**: If we restrict the references to be unknown we deal with only menus and stochastic choice functions over alternatives and menus. **C**: .
ABC
ABC
ABC
BAC
Selection 2
There are, however, important issues with equilibrium models. <|MaskedSetence|> <|MaskedSetence|> Second, even if equilibrium is assumed, the choice of a Cobb-Douglas production function is extremely special, as it allows a feasible equilibrium (where all firms produce a positive amount of goods and sell them at positive prices) to exist for arbitrary input-output networks and any value of firm productivities. This is a result of the relatively high amount of substitutability of input goods implied by Cobb-Douglas. As soon as the elasticity of substitution σ is lower, some firms may have to disappear for a feasible equilibrium to exist – much as what happens in the context of the Lotka-Volterra description of ecological communities discussed in section 3.3.
**A**: First and foremost, it sounds more plausible to think that the large aggregate disruptions we are trying to decipher are disequilibrium effects. **B**: Indeed, the assumption that during such periods all markets instantaneously clear (no stock-outs, no inventories) and firms muddle through without defaulting is clearly untenable. **C**: Hence, bankruptcies and bankruptcy waves must be considered within such generalized models, which means that a fully out-of-equilibrium, dynamical description of shock propagation is necessary to describe the resulting aggregate fluctuations. .
ABC
CBA
ABC
ABC
Selection 1
<|MaskedSetence|> Column (1) reports the prevalence rates measured based on our text analysis for each of these categories in the sample of job ads from 2015–2019. Column (2) reports the corresponding rates for the random sample of 400 job ads, also based on our text analysis procedure, while Column (3) reports the prevalence rates from manual recognition for this random sample. Finally, Columns (4)-(6) provide rates of success, precision, and sensitivity by comparing prevalence rates for each category across text analysis and manual recognition. Across the ten broad categories of job attributes, we find that our procedure performs well in terms of success, precision, and sensitivity. <|MaskedSetence|> The notable exceptions are “task-related attributes”, where we find a success rate of 72 percent and precision of 64.2 percent, and “minor perks”, where we find a success rate of 66.5 percent and precision of 33.7 percent. The former is driven by differences in detection rates for whether the job “involves leadership responsibilities” and “work involves travelling,” while the latter is driven by “central location,” which is more frequently detected in our text analysis.21See Appendix Table A.3, which shows the results of the same validation exercise for each of the 47 attributes underlying the ten broad categories in Table 3. Overall, high levels of precision indicate that the attributes recovered by text analysis reflect the actual content of the attributes well. <|MaskedSetence|>
**A**: Sensitivity is always equal to or larger than precision, above 80 percent for all attributes. . **B**: Success and precision rates are above or around 80 percent for most categories. **C**: Table 3 shows the results of this comparison for ten broad categories of job attributes.
CBA
CBA
CBA
CBA
Selection 3
<|MaskedSetence|> This estimator uses the confusion matrix between land cover ground truth and map product points to correct for misclassification bias in the map product [card1982using, stehman2013estimating, olofsson2013making]. <|MaskedSetence|> One contribution of the present work is a comparison of the post-stratified area estimator and PPI. <|MaskedSetence|> For other methods, [gallego2004remote] provides a survey of area estimators that combine ground truth and map product data, including regression, ratio, and calibration estimators. .
**A**: This method only applies to area estimation, not regression coefficient estimation tasks. **B**: Related work. The remote sensing community commonly uses a post-stratified estimator to generate confidence intervals for area estimates. **C**: We find both experimentally and mathematically that these two methods produce similar outputs for area estimation.
BAC
BAC
BAC
BAC
Selection 4
LUKE is restricted to the top 50K Wikipedia entries, due to computational constraints in calculating the softmax. <|MaskedSetence|> <|MaskedSetence|> Arora et al. (2024) also show that an embedding model achieves higher accuracy disambiguating historical texts. <|MaskedSetence|>
**A**: In short, again a sequence-to-sequence architecture is costly to run at scale and doesn’t necessarily offer performance advantages.. **B**: Many of the top-50K entries are not people, and many people that appear in historical news or government documents are not amongst the top-50K. Additionally, it does not accommodate out-of-knowledge base entities and requires sparse entity priors to initialize the model. **C**: In many applications, not all individuals will be in a knowledge base, and the model needs to be able to predict this. The neural translation model’s sequence-to-sequence architecture is slow during inference, taking approximately 60 times longer to run than bi-encoder embedding models.
ACB
BCA
BCA
BCA
Selection 2
The increase in the share of low-value transactions in the total number of transactions has important implications for international trade. First, it is argued that a growing share of imports enters the US and the EU with a value equal to or below the de minimis threshold in order to avoid paying tariffs. This concern is particularly strong in the US since tariffs on many Chinese goods were increased in 2018. Furthermore, there are claims that de minimis enables the import of goods that do not meet health and safety standards.2See “Where Textile Mills Thrived, Remnants Battle for Survival” by Jordyn Holman, The New York Times, 21 January 2024 at https://www.nytimes.com/2024/01/21/business/economy/textile-mills-carolina-trade-de-minimis.html and “The EU targets China’s Temu and Shein with proposed import duty” by Andy Bounds and Paola Tamma, Financial Times, 3 July 2024 at https://www.ft.com/content/1c4c0bee-f67e-404b-877d-e0cb38faf2d6. <|MaskedSetence|> Therefore, the increase in the share of low-value transactions could have important implications for trade policy. Second, the increase in the share of low-value transactions has been facilitated by the emergence of e-commerce platforms that allow consumers to choose from a wide range of products. <|MaskedSetence|> <|MaskedSetence|> For example, Volpe-Martincus et al., (2024) show that low-value transactions are not affected by trade agreements, and this paper shows that the time-sensitiveness of low-value transactions leads to a greater use of air transport..
**A**: The alleged misuse of the de minimis threshold has led to proposals to eliminate it both in the United States and in the EU. **B**: These platforms allow consumers to be directly involved in international trade, and, therefore, their purchasing decisions have a greater weight in shaping trade flows (Volpe-Martincus et al.,, 2024). **C**: If there are differences in the variables that govern consumer purchasing decisions from those of firms, it will be important to take those differences into account when designing economic policies.
ABC
ABC
ABC
ABC
Selection 1
<|MaskedSetence|> (2014) extending the earlier analysis by Jackman and Roper (1987). In the static matching model with multiple sectors, Jackman and Roper (1987) demonstrate that aggregate hires are maximized when unemployment is distributed across sectors in a way that equalizes sectoral labor-market tightness. <|MaskedSetence|> (2014) extend this framework by incorporating a dynamic, stochastic environment with multiple sources of heterogeneity, providing a methodology to construct counterfactual measures of unemployment in the absence of mismatch. Their focus is on mismatch unemployment, defined as unemployed workers searching in the “wrong” sector.2Note that a distinct body of literature, such as Eeckhout and Kircher (2011), uses the term “mismatch” to describe employed individuals in suboptimal jobs, where the alignment between worker skills and firm capital is not ideal. <|MaskedSetence|> In addition, Guvenen et al. (2020) explores skill mismatch, which quantifies the gap between the skills required by an occupation and the abilities a worker possesses. Another approach, as in Shimer (2012), takes a flow perspective by decomposing unemployment volatility into transition rates between four labor force states: (i) permanent-contract employment, (ii) temporary-contract employment, (iii) unemployment, and (iv) non-participation, providing a foundation for calculating mismatch measures based on employment status..
**A**: Eeckhout (2018) provides an overview of this literature, identifying three primary causes of mismatch: search frictions (Eeckhout and Kircher 2011), stochastic sorting (Chade and Eeckhout 2016), and multidimensional types (Lindenlaub 2017). **B**: Şahin et al. **C**: Second, this paper contributes to the literature on labor market mismatch, building on the foundational work of Şahin et al.
CBA
CBA
CBA
CBA
Selection 2
<|MaskedSetence|> <|MaskedSetence|> We discuss identification under two sets of weaker assumptions: random assignment and exclusion restriction (without monotonicity), and random assignment and monotonicity (without exclusion restriction). We first introduce two causal parameters: the local average treatment-controlled direct effect (LATCDE), and the local average instrument-controlled direct effect (LAICDE). <|MaskedSetence|> Second, we derive sharp bounds for the twenty parameters under respectively (i) random assignment, exclusion restriction, and monotonicity; (ii) random assignment, and exclusion restriction; and (iii) random assignment, and monotonicity. Specifically, under the random assignment and monotonicity assumptions, we derive sharp bounds on the local average treatment-controlled direct effects for the always-takers and never-takers, respectively, and the total average controlled direct effect for the compliers. Additionally, we show that the intent-to-treat effect can be expressed as a convex weighted average of these three effects. Third, we propose a misspecification robust bound for the vector parameter of interest under the three possible sets of assumptions. Finally, we apply our method on the proximity to college instrument and find that growing up near a four-year college increases the wage of never-takers (who represent more than 70% of the population) by a range of 4.15% to 27.07%. .
**A**: We define a twenty-dimensional real-valued vector parameter of interest that contains the eight LATCDEs, the eight LAICDEs, and the four types’ probabilities. **B**: We consider a randomized experiment setting where the random assignment assumption is maintained. **C**: The current paper aims at developing a unifying approach that would provide the maximal combination of the three assumptions that are compatible with the data.
CBA
CBA
CBA
CBA
Selection 3
<|MaskedSetence|> Imbens and Rubin (2015) state that, at some level, all methods for causal inference can be viewed as missing data imputation methods, although some are more explicit than others. For instance, under certain assumptions, the matching method (Abadie and Imbens (2006, 2011)) explicitly imputes the missing counterfactual for treated units with meticulously selected controls. The DID method (Card and Krueger (1993); Ashenfelter (1978)), on the other hand, implicitly imputes the missing counterfactual by differencing the control and treated units before and after treatment. Meanwhile, the SCM method explicitly imputes the missing counterfactual with a weighted average of control units. Our method aligns with the recent trend in the causal inference literature, aiming to explicitly impute the missing counterfactuals by modeling the entire DGPs, a strategy highlighted by Athey et al. <|MaskedSetence|> <|MaskedSetence|> Additionally, it addresses the limitations of the original synthetic control method (SCM) developed by Abadie and Imbens (2006) and Abadie et al. (2010), as well as its variants advanced by Ben-Michael et al. (2021) and Arkhangelsky et al. (2021), which require the outcomes of treated units to lie within or near the convex hull formed by the control units..
**A**: Causal inference in economics and other social sciences is frequently complicated by the absence of counterfactuals, which are essential for evaluating the impact of a treatment or policy intervention. **B**: (2021) with their matrix completion (MC) method, and Xu (2017) with their CSC-IFE method. As a newly developed branch of causal inference, modeling entire DGPs offers distinct advantages. **C**: This approach helps to overcome the constraints imposed by untestable and stringent assumptions, such as unconfoundedness and common support in matching methods (Rosenbaum and Rubin (1983); Rubin (1997)), as well as the parallel trends assumption in difference-in-differences (DID) models (Card and Krueger (1993)).
ABC
ABC
CBA
ABC
Selection 2
The body of work studying decision-making under uncertainty is sizeable. The work closest to this one is Pease and Whitmeyer (2023). There, we formulate a binary relation between actions: action a is safer than b if the set of beliefs at which a is preferred to b grows larger, in a set inclusion sense, when we make Alice more risk averse. Rothschild and Stiglitz (1970) is a seminal work that characterizes (mean-preserving) transformations of lotteries that are preferred by all risk-averse agents. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: Crucially, these indices and measures correspond to inherently stochastic objects–the lotteries at hand. **B**: Our conception of an improvement to an action concerns comparisons of state-dependent payoffs, which are themselves non-random objects (they are just real numbers). . **C**: Aumann and Serrano (2008) formulate a “measure of riskiness” of gambles, as do Foster and Hart (2009) (who are subsequently followed up upon by Bali, Cakici, and Chabi-Yo (2011) and Riedel and Hellmann (2015)).
CAB
CAB
CAB
ABC
Selection 2
The weight put on the monopolist’s payoff by the regulator captures the intensity of the regulator’s distributional preferences. A larger weight corresponds to a weaker distributional preference (or stronger preference for efficiency). We show that the set of undominated mechanisms expands as the regulator’s distributional preferences weaken. However, our necessary and sufficient conditions are invariant in this preference parameter. <|MaskedSetence|> One that satisfies our sufficient conditions and is unaffected by the regulator’s distributional preferences, and the other does not satisfy our sufficient conditions but expands as the regulator puts more weight on the monopolist’s profit. Surprisingly, the efficient mechanism, where the monopolist always operates and produces efficient quantity (maximizing total surplus), belongs to neither of these subsets as it is dominated. Our analysis has two critical implications for regulation design. <|MaskedSetence|> A max-min optimal mechanism has the highest worst-case payoff. <|MaskedSetence|>
**A**: We establish that if the regulator’s set of priors contains a prior that (first-order) stochastically dominates every other prior in the set, then the max-min optimal mechanism is the optimal mechanism for that prior.. **B**: This highlights that the set of undominated mechanisms can be divided into two subsets. **C**: The first is for an ambiguity-averse regulator who evaluates any mechanism by the worst-case expected payoff it generates over a set of priors.
BCA
BCA
CAB
BCA
Selection 1
1.1 Related Literature Sequential experiments and analysis have been a classical problem in the statistical literature, to facilitate early stopping. Early examples include Wald’s celebrated sequential probability ratio test (Wald (1947), Chapter 3) and the generalized likelihood ratio approach of Robbins and Siegmund (1974). <|MaskedSetence|> This is not merely of theoretical interest — if not accounted for correctly, continuous monitoring can lead to substantial inflation of Type-I error (Armitage et al. (1969); Berman et al. (2018)). <|MaskedSetence|> Recent research progress has often been made in industry, with examples including risk mitigation in online experimentation (Ham et al. (2022)), rapid software deployment (Lindon et al. (2022)), strengthening canary testing (Schermann et al. (2018)), multinomial count data modeling for increasing conversion rate (Lindon and Malek (2022)) from Netflix, flexible monitoring and sample size calculations at Adobe (Maharaj et al. (2023)), or usage of sequential testing by Evidently at Amazon Web Services (Evidently (2024)). Turner and Grünwald (2023) consider the problem of anytime-valid inference in contingency tables, with potential application in labor induction studies (Turner et al. (2024)). <|MaskedSetence|>
**A**: Such methods have been seriously appreciated not merely due to their mathematical elegance, but their practical utility. **B**: One of the key domains to apply anytime-valid inference is in continuous monitoring of A/B tests (Johari et al., 2022). . **C**: Robbins in particular shifted the focus from Wald’s designs of a single stopping rule to instead allowing for continuous monitoring and stopping at any time, possibly not specified in advance.
CAB
CAB
CAB
BAC
Selection 1
<|MaskedSetence|> This information is public knowledge for all players. <|MaskedSetence|> Among citizens, those without special abilities are known as villagers, while those with special abilities are assigned specific roles. Some variations of the game also feature special powers for werewolves. <|MaskedSetence|>
**A**: Typically, players are divided into two opposing groups: citizens and werewolves. **B**: For example, a citizen who can check another player’s group (citizen or werewolf) once per night is called a prophet. **C**: 2.1 Determine the Set of Identities Before the game begins, it is essential to determine the number of players and their respective identities.
CAB
CAB
CAB
ABC
Selection 3
<|MaskedSetence|> However, it has been confirmed empirically that when RV is calculated using ultra-high-frequency data, the RV value skyrockets. <|MaskedSetence|> The theoretical validity of RV assumes that there is no measurement error in the data, but actual data include an observation error called market microstructure noise (MN), so RV is biased. Zhou[1996] proposed an improved estimator of RV that converges to IV under the existence of MN, assumed to be independent noise. <|MaskedSetence|> Under this “dependent” MN, bias remains in the estimator of Zhou[1996]. This was solved by the Realized Kernel (RK) of Barndorff-Nielsen et al[2011], and thereafter, studies improving its convergence rate and empirical research using RK have continued. For a survey of improved RV estimators, see Mukherjee et al[2019], for example. Barndorff-Nielsen and Shephard[2002] proposed a model-based approach to IV estimation with RV as the observation. In other words, by formulating RV as a state space time series model representing the sum of IV and observation error, it is possible to estimate and predict IV from RV data with the Kalman filter. Meddahi[2003] generalizes this model, assumes a general class of continuous-time SV models (the square-root stochastic autoregressive volatility model) for instantaneous volatility, derives a model for IV, and clarifies the relationship between the SV model and IV parameters. .
**A**: The reason why this phenomenon occurs is explained as follows. **B**: Since RV is based on the quadratic variation, in order to make an estimate that is closer to IV, we have to increase the frequency of the data used for the RV calculation. **C**: However, empirical studies by Hansen and Lunde[2006] have reported that MN is not independent: it is autocorrelated and correlated with returns.
BAC
BCA
BAC
BAC
Selection 3
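As a hedged illustration of the estimator discussed in the row above (standard textbook expressions, not drawn from the cited papers): with n intraday log-returns r_{t,i} on day t, the realized variance and its behavior with and without i.i.d. microstructure noise of variance ω² are

```latex
% Realized variance from n intraday returns r_{t,i} on day t; without noise it
% converges in probability to the integrated variance as sampling becomes finer
\[
\mathrm{RV}_t = \sum_{i=1}^{n} r_{t,i}^{2}
  \;\xrightarrow{\;p\;}\;
  \mathrm{IV}_t = \int_{t-1}^{t} \sigma_s^{2}\, ds
  \qquad (n \to \infty,\ \text{no noise}),
\]
% with i.i.d. microstructure noise of variance \omega^2 the estimator is biased upward
\[
\mathbb{E}[\mathrm{RV}_t] \;\approx\; \mathrm{IV}_t + 2 n \omega^{2}.
\]
```

The bias term growing linearly in n is consistent with the observation in the passage that RV skyrockets when computed from ultra-high-frequency data.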
Our research seeks to identify patterns indicative of financial instabilities that may trigger regime shifts. <|MaskedSetence|> By examining how these correlations change over time, the project seeks to better understand the dynamics of the market and how they relate to the activities of both short- and long-term investors. This approach recognizes that market volatilities at different time resolutions reflect the influences of traders operating on varying timescales, thus providing insights into overall market behavior [Muller:1997jef]. The main modelling approach is based on tracking observations using a rolling-window technique; this includes examining the interrelationships among different levels of observation—from pairwise correlations to system-wide connectedness—in relation to both short- and long-term timescales. We hypothesize that using a multiscale approach across observation levels can provide critical insights into the stability of the complex dynamical system under examination. In an unstable market, investors with very short horizons are assumed to be the most active, and some long-term investors may adopt shorter investment horizons [kristoufek2013]. <|MaskedSetence|> The existence and activity of investors with a wide range of investment horizons are essential; they contribute to the smooth and stable functioning of the market. <|MaskedSetence|>
**A**: Herein, we investigate whether, during periods of market instability or in the preceding periods, short-term cross-correlation coefficients deviate significantly from long-term cross-correlation coefficients in comparison to those during stable periods [Scheffer:2009nature].. **B**: The tool developed in this work analyzes time-varying nonlinear cross-correlations among market indices spanning various timescales. **C**: At critical points, long-term investors will restrict or even stop their trading activities, and short-term investors will become dominant.
CBA
BCA
BCA
BCA
Selection 3
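The rolling-window comparison described in the row above can be sketched in a few lines. The code below is a simplified illustration only: it uses plain Pearson correlations rather than the nonlinear, multiscale measures of the study, and the file name, window lengths, and variable names are hypothetical.

```python
import numpy as np
import pandas as pd


def rolling_mean_correlation(returns: pd.DataFrame, window: int) -> pd.Series:
    """Average pairwise correlation across all market indices in each rolling window.

    `returns` holds one column of (log-)returns per index; `window` is the number of
    observations per window and acts as a rough proxy for the investment horizon.
    """
    n_assets = returns.shape[1]
    off_diag = ~np.eye(n_assets, dtype=bool)  # drop self-correlations on the diagonal
    values = {}
    for end in range(window, len(returns) + 1):
        corr = returns.iloc[end - window:end].corr().to_numpy()
        values[returns.index[end - 1]] = corr[off_diag].mean()
    return pd.Series(values)


# Hypothetical usage: contrast short- and long-horizon connectedness over time.
# returns = pd.read_csv("index_returns.csv", index_col=0, parse_dates=True)
# short_run = rolling_mean_correlation(returns, window=20)   # roughly one trading month
# long_run = rolling_mean_correlation(returns, window=250)   # roughly one trading year
# divergence = (short_run - long_run).abs()                  # candidate instability signal
```

Tracking how the short-window series diverges from the long-window series over time is one simple way to operationalize the comparison of short- and long-term cross-correlation coefficients described in the passage.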
2 Results At the heart of the model is a commitment problem.8While a commitment problem is also at the heart of the Acemoglu and Robinson (2000) framework, the way it is solved there is different. There, extending the franchise involves the rich (elite) ceding decision-making power to the poor (disenfranchised). This allows the rich to commit ex-ante to a policy they would not choose ex-post. Conditional on the size of the pie – which depends critically on investment decisions – the elite wants to take as much of it as possible. <|MaskedSetence|> But the disenfranchised know this, and so do not invest. Investment in the economy is low and the pie is small. A promise to provide public goods would not be credible – in my model, the elite holds all of the power and can ultimately do whatever it wants. <|MaskedSetence|> This is not a standard commitment device: the elite can never tie their hands in this model. <|MaskedSetence|>
**A**: Rather, they can change the incentives they face, and so change their preferred action.. **B**: So to unlock investment by the disenfranchised, the elite must change what it wants to do. **C**: This means they will expropriate.
ABC
CBA
CBA
CBA
Selection 3
This echoes a theme in the broader literature on robust mechanism design (Carroll, 2015, 2017), which finds that robust mechanisms mirror those used in practice and are simple, intuitive, and computationally tractable. Of course, the regulator might in practice have some beliefs about the agent’s learning process and, if we are prepared to assume so, why then has she not solved the infinite-dimensional problem of implementing the path of transfers which, via the agent’s optimal stopping behavior, maximizes the regulator’s expected payoff?24Observe that each dynamic mechanism and learning process induces a different optimal stopping problem for which no general solution is known. <|MaskedSetence|> An interpretation is that this is computationally intractable, so Theorem 1 represents the best a boundedly rational agent can do.25It will be helpful to compare our analysis to that of robust multidimensional screening where the seller knows the marginal but not joint distributions of valuations for each product (Carroll, 2017). <|MaskedSetence|> <|MaskedSetence|>
**A**: Indeed, analytic solutions to only a small class of stopping problems are known. **B**: As in our setting, Bayesian optimal multidimensional screening is well-known to be hard (see Rochet (2024) for a recent survey). **C**: Carroll (2017) shows that the simple mechanism of selling each item separately delivers the highest worst-case guarantee when nature adversarially chooses the joint distribution to minimize the seller’s revenue and also entertains a behavioral interpretation. An alternate—and, to the expected utility purists, more appealing—interpretation is that mechanisms of the form in Theorem 1 can also perform well in the expected utility sense for a class of reasonable priors over learning processes.26See, for instance, Chung and.
ABC
ABC
ABC
BAC
Selection 3