text_with_holes | text_candidates | A | B | C | D | label
---|---|---|---|---|---|---|
This paper addresses precisely this scenario, where the probability that a set of observations is a Sybil network is neither zero nor one; and so neither full exclusion nor full inclusion is the optimal solution. I frame the question as a weighted regression problem, and show that the optimal weight matrix – to minimize the mean-squared error of the estimator – is simply the inverse of the expectation of the network topology across different permutations. This solution nicely handles many practical cases, such as the possibility of several Sybil networks either being distinct networks or being one large network. <|MaskedSetence|> (yangwilson), or through systematic approaches, e.g. <|MaskedSetence|> <|MaskedSetence|> At that point, the methodology in this paper can be applied to use that information efficiently, downweighting suspected observations by the probability and potential size of the network.
.
|
**A**: For the most common setup – multiple distinct potential Sybil networks whose observations have homoskedastic errors – I also offer a direct solution that does not require matrix inversion, so that this methodology can be deployed successfully on large populations without hitting computational limits.
This is a natural extension from the existing literature, which focuses heavily on identifying Sybil networks either through ad-hoc analyses, e.g.
**B**: (danezis2009sybilinfer) or (6787042).
**C**: These different approaches can help experimenters find potential networks and assess their likelihoods of being interlinked or independent observations.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 4
|
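The weighted-regression passage above lends itself to a small illustration. The sketch below is not the paper's estimator; it is a minimal numpy rendering of the stated idea, in which the expected topology matrix is assembled as a probability-weighted block matrix over hypothesized Sybil groups and the weight matrix is its inverse. The function names, the block construction, and the toy data are all assumptions made for the example.

```python
import numpy as np

def expected_topology(n, candidate_groups, probs):
    """Expected 'network topology': identity plus probability-weighted blocks,
    one block per hypothesized Sybil group (construction is an assumption)."""
    omega = np.eye(n)
    for members, p in zip(candidate_groups, probs):
        for i in members:
            for j in members:
                if i != j:
                    omega[i, j] += p  # off-diagonal mass = probability the group is one network
    return omega

def weighted_estimate(X, y, omega):
    """GLS-style estimator with weight matrix W = E[Omega]^{-1}."""
    W = np.linalg.inv(omega)
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)

# toy example: observations 3, 4, 5 form a suspected Sybil network with probability 0.7
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(8), rng.normal(size=8)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=8)
omega = expected_topology(8, candidate_groups=[[3, 4, 5]], probs=[0.7])
print(weighted_estimate(X, y, omega))
```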
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We normalized the results so that the five groups sum up to 1. Feature groups are defined in Table 13 in Appendix A. Time period 2006Q1-2016Q2. Source: Authors’ calculations based on Experian data.
.
|
**A**: •
Notes: SHAP values for five feature groups.
**B**: We then calculate the sum of the absolute value for each feature, aggregate it across the feature groups, and report the results for the group.
**C**: For each prediction window, we compute the SHAP value for 100,000 randomly sampled observations and for each feature.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 4
|
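Since the figure note above describes a concrete aggregation (sum of absolute SHAP values per feature, aggregated to feature groups, normalized so the groups sum to 1), a short sketch may help. It assumes the per-observation SHAP matrix has already been computed (for example with a tree explainer); the feature and group names below are invented for illustration.

```python
import numpy as np
import pandas as pd

def group_shap_shares(shap_values, feature_names, feature_to_group):
    """Sum |SHAP| over observations for each feature, aggregate to feature
    groups, and normalize so the group shares sum to 1."""
    abs_sum = pd.Series(np.abs(shap_values).sum(axis=0), index=feature_names)
    group_totals = abs_sum.groupby(feature_to_group).sum()
    return group_totals / group_totals.sum()

# illustrative use with random numbers standing in for SHAP values
rng = np.random.default_rng(1)
features = ["balance", "utilization", "inquiries", "file_age", "delinquencies"]
groups = {"balance": "debt", "utilization": "debt", "inquiries": "credit_seeking",
          "file_age": "history", "delinquencies": "payment"}
print(group_shap_shares(rng.normal(size=(1000, 5)), features, groups))
```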
<|MaskedSetence|> <|MaskedSetence|> Athey et al. <|MaskedSetence|> (2022); Dimakopoulou et al. (2017); Kuang and Wager (2023), to list but a few relevant papers..
|
**A**:
Recent research in policy learning extends to and intersects with the machine learning literature on bandit algorithms (see Lattimore and Szepesvári, 2020, for a monograph on bandit algorithms).
The welfare-performance criterion that we consider concerns the cumulative welfare for sequentially arriving units rather than for the super-population.
**B**: (2022) and Qin and Russo (2024) assess the trade-off between the in-sample welfare and the super-population welfare of best-arm identification and study how to balance them out.
Recent advances in bandit algorithms in the econometrics literature include those studied in Adusumilli (2021); Kock et al.
**C**: The latter corresponds to the welfare objective in the best-arm identification problems as studied in (Russo and Roy, 2016; Kasy and Sautmann, 2021; Ariu et al., 2021), among others.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 2
|
In this paper, we argue that even though this criticism is extremely valid, it may have gone too far in the sense that the monotonicity demands placed on stochastic choice models by this literature may have been too ambitious. <|MaskedSetence|> From Rothschild and Stiglitz (1970), we know that all risk-averse agents prefer the less risky option in any mean-preserving spread (MPS). Thus, in a well-behaved stochastic choice model the probability of choosing the less risky option in an MPS should increase in the degree of absolute risk aversion. This argument relies on two implicit assumptions: First, it assumes that Pratt (1964)’s results about choice between a safe and a risky option extend in the right way to choice between two risky options. <|MaskedSetence|> In a similar way, Apesteguia and Ballester (2018) postulate that choice probabilities should be increasing in the degree of risk aversion for any pair of lotteries for which all agents above a given threshold of risk aversion pick the safer option while all agents below the threshold pick the riskier one. <|MaskedSetence|> Again, this implicit assumption relies on the intuition that the strength of the preference for the safer option should be increasing in the degree of risk aversion.
.
|
**A**: For illustration, consider how Wilcox (2011) motivates his notion of stochastic monotonicity: From Pratt (1964), we know that an agent with a larger coefficient of absolute risk aversion is more risk averse.
**B**: Second, it assumes that the strength of the preference for the safer of the two options increases with the degree of risk aversion.
**C**: Effectively, this assumes that the problem is so good-natured that a single-crossing property at the level of preferences suffices to guarantee a monotonicity property at the level of choice probabilities.
|
ABC
|
ABC
|
ACB
|
ABC
|
Selection 2
|
According to Kodinariya et al. [41], K-means clustering requires the following steps. First, place K points in the space, representing the initial group centroids, as far away from each other as possible. Second, take each point in the dataset and associate it with the nearest centroid. <|MaskedSetence|> Next, repeat steps 2 and 3 until the centroids stop moving.
In our research, we choose the Silhouette method to find the optimal number of clusters in K-means clustering. The Silhouette score, computed as the average of the Silhouette coefficients of all samples for a given number of clusters, measures clustering quality on a scale from -1 to 1 [42]. Typically, an average close to 1 means that all the points are in the correct cluster [42]. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The formula of the Silhouette coefficient can be represented as the following.
.
**B**: Third, after finishing assigning points, recalculate the new K centroids in the middle of the clusters.
**C**: Therefore, we should find the number of clusters that yield the highest Silhouette score.
|
BCA
|
BCA
|
BCA
|
CBA
|
Selection 2
|
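The K-means and Silhouette passage above maps directly onto a few lines of scikit-learn. The sketch below picks the number of clusters with the highest average Silhouette score; the candidate range and the synthetic data are arbitrary choices for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k_by_silhouette(X, k_range=range(2, 11), random_state=0):
    """Fit K-means for each candidate K and return the K with the highest
    average Silhouette score (scores range from -1 to 1)."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    best_k = max(scores, key=scores.get)
    return best_k, scores

# toy usage on synthetic data with three well-separated blobs
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in ([0, 0], [3, 3], [0, 4])])
k, scores = best_k_by_silhouette(X)
print(k, {kk: round(v, 3) for kk, v in scores.items()})
```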
We then propose mechanisms underlying algorithmic collusion that account for the outcomes observed in actual simulations. As common wisdom suggests, reinforcement learning algorithms adapt to the NE through sequential, alternating price undercutting. However, this undercutting process is constantly interrupted because the Q-values of low prices become inflated. Hence, even if the winning agent grabs the entire market share by undercutting, the resulting profit is too low to reinforce its choice of the undercut price in the future. Both the winning and the losing agents then have to bilaterally rebound to higher prices. The undercutting process is then repeated, making the price curve resemble an Edgeworth cycle. <|MaskedSetence|> Therefore, a strictly positive gap emerges and can sustain unilateral deviations within a given number of periods, as the exploration rate decreases to a sufficiently low level. Finally, we argue that the discount factor plays a critical role in the inflation of Q-values. <|MaskedSetence|> However, when Q-learning agents update their Q-values, the strictly positive discount factor always injects the maximal Q-value as the estimate of the discounted future payoff, and a bubble in Q-values therefore emerges. In early episodes, actions are mainly chosen through exploration, and the discount factor then inflates the Q-values of the chosen actions when they are updated. One way to solve the inflation problem is to impose a lower bound on the prices each firm can quote. <|MaskedSetence|> One policy recommendation for regulators is that allowing low levels of algorithmic collusion can rule out high levels of collusion.
We finally show how our mechanism accounts for the simulation outcomes reported in the existing literature. Asker et al. (2022) and Asker et al. (2023) replace exogenous exploration by initializing Q-values at a relatively high level, termed “optimistic initialization”. The collusive outcome observed in those simulations is then driven by the artificial inflation of Q-values during initialization. Banchio and Skrzypacz (2022) draw a comparison between the first- and second-price auctions and find that Q-learning agents adapt to the NE in all sessions of the second-price auction (SPA). Note that in the SPA, the winner's profit depends only on the bid quoted by the loser. Therefore, a unilateral rebound, instead of a bilateral rebound, follows the interruption of the price undercutting process. Both learning agents become trapped alternately choosing the highest bid, until both choosing the highest bid is stable. Asker et al. (2022) also propose that synchronous updating can completely eliminate algorithmic collusion and that partially synchronous updating can alleviate it. Synchronous updating, by constantly updating the Q-values of low prices even when they are not chosen, squeezes the bubble out of their Q-values, so the price undercutting process continues without interruption.
.
|
**A**: When an agent quotes the minimal price and wins the market game, the generated profit is still high enough, even in the face of the inflated Q-values, to reinforce its choice of the minimal price in the future.
**B**: When these Q-learning agents bilaterally rebound to the same relatively high price level coincidentally, the Q-value of this high price increases while the Q-values of the rest prices stay the same.
**C**: Note that Q-values are initialized in a plausible range such that each price is reinforced as long as it wins the market game.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 4
|
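The Q-learning mechanism described above can be illustrated with a deliberately simplified simulation. The sketch below uses stateless (memoryless) Q-learning in a winner-takes-all Bertrand game, whereas the simulations discussed in the passage condition on past prices; the price grid, its lower bound, and all learning parameters are assumptions. Its only purpose is to show where the discounted max-Q term enters the update, which is the inflation channel the passage refers to.

```python
import numpy as np

rng = np.random.default_rng(3)
prices = np.linspace(0.1, 1.0, 10)      # discrete price grid with a lower bound (assumption)
n_agents, n_actions = 2, len(prices)
Q = np.zeros((n_agents, n_actions))     # stateless Q-values, one row per agent
alpha, gamma = 0.1, 0.95                # learning rate and discount factor (assumptions)
eps0, decay = 1.0, 1e-4                 # exploration schedule (assumption)

def profits(actions):
    """Winner-takes-all Bertrand market with unit demand and zero cost; ties split the sale."""
    p = prices[actions]
    winners = np.flatnonzero(p == p.min())
    pi = np.zeros(n_agents)
    pi[winners] = p.min() / len(winners)
    return pi

for t in range(100_000):
    eps = eps0 * np.exp(-decay * t)
    actions = np.array([rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[i]))
                        for i in range(n_agents)])
    pi = profits(actions)
    for i in range(n_agents):
        # the gamma * max(Q) term is the channel through which Q-values can get inflated
        Q[i, actions[i]] += alpha * (pi[i] + gamma * Q[i].max() - Q[i, actions[i]])

print("greedy prices after learning:", prices[Q.argmax(axis=1)])
```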
<|MaskedSetence|> This kind of sample splitting is important to remove overfitting bias that can arise if we use the same observations for training the ML models and estimating the effects (Chernozhukov et al.,, 2018). <|MaskedSetence|> In general, sample splitting leads to less efficient estimation, since we use only part of the data for training and estimation, respectively. Nevertheless, DML regains full efficiency by switching the roles of the training and estimation sample and averaging the resulting estimates (“cross-fitting”) (Chernozhukov et al.,, 2018).
However, cross-fitting relies on the assumption that the observations are independent and identically distributed (i.i.d.) (Chernozhukov et al.,, 2018). <|MaskedSetence|> As a consequence, there is a danger of overfitting the ML models to the hold-out data. While this certainly makes consistency statements, asymptotic analysis, and valid confidence intervals difficult (e.g., Wooldridge,, 2010), it is unclear how severe the practical consequences for the estimated effect coefficients really are. In our study, we will assess how different splitting strategies affect the finite-sample performance of different DML estimators. We leave the analysis of asymptotic properties and the construction of valid confidence intervals to future research..
|
**A**: To avoid this overfitting in DML, we train the ML methods on a part of the data, but make predictions and estimate the effects on another part that we did not use for training.
**B**: As soon as we enter settings with panel data, this assumption is violated, because data points are dependent over time (i.e., serial correlation/autocorrelation) and/or within a cluster, and can end up in the same or in different samples when randomly splitting the data (Chiang et al.,, 2022).
**C**:
One essential component of the original DML framework (Chernozhukov et al.,, 2018) is the cross-fitting procedure.
|
CAB
|
CAB
|
CAB
|
BAC
|
Selection 2
|
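To make the cross-fitting step concrete, here is a minimal sketch for a partially linear model: nuisance functions are fit on one fold, residuals are formed only on the held-out fold, the folds swap roles, and the final-stage coefficient uses all out-of-fold residuals. This is a generic illustration with i.i.d. toy data, not the paper's estimators or its panel-aware splitting schemes.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_plr_ate(y, d, X, n_folds=2, random_state=0):
    """Double ML for a partially linear model with cross-fitting:
    residualize y and d on X using out-of-fold predictions, then regress
    the y-residuals on the d-residuals."""
    y_res, d_res = np.zeros_like(y, dtype=float), np.zeros_like(d, dtype=float)
    for train, test in KFold(n_folds, shuffle=True, random_state=random_state).split(X):
        my = RandomForestRegressor(random_state=random_state).fit(X[train], y[train])
        md = RandomForestRegressor(random_state=random_state).fit(X[train], d[train])
        y_res[test] = y[test] - my.predict(X[test])   # predictions only on the held-out fold
        d_res[test] = d[test] - md.predict(X[test])
    return float(d_res @ y_res / (d_res @ d_res))     # final-stage OLS slope

# toy usage with a true effect of 1.0
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))
d = X[:, 0] + rng.normal(size=2000)
y = 1.0 * d + X[:, 0] ** 2 + rng.normal(size=2000)
print(dml_plr_ate(y, d, X))
```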
In their seminal paper on tax salience, Chetty et al. (2009) note that the tax incidence on consumers is lower when “producers must actively “shroud” a tax levied on them in order to reduce its salience”. Yet, theoretical and empirical evidence on this channel is scarce. <|MaskedSetence|> Ellison, 2005; Gabaix and Laibson, 2006; Hossain and Morgan, 2006; Carlin, 2009; Ellison and Ellison, 2009; Brown et al., 2010; Ellison and Wolitzky, 2012; Piccione and Spiegler, 2012; Kosfeld and Schüwer, 2017; Heidhues et al., 2016).
Second, there is a growing literature in public finance, industrial organization and marketing that examines the pass-through and the effects of taxes on various traditional (physical) sin goods, such as sugar-sweetened beverages (Allcott et al., 2019a, b; Seiler et al., 2021; Keller et al., 2023), alcohol (Kenkel, 2005; Hindriks and Serse, 2019), cigarettes (Barnett et al., 1995; Gruber and Kőszegi, 2004; Harding et al., 2012), and marijuana (Hollenbeck and Uetake, 2021). Some of these studies show full pass-through of these taxes, or even over-shifting—one possible outcome in models with imperfect competition (Anderson et al., 2001; Weyl and Fabinger, 2013). There is also a renewed interest among industrial economists in studying tax pass-through to learn more about consumer preferences, market power, and competition in different markets (e.g. Miravete et al., 2018; Pless and van Benthem, 2019; Miravete et al., 2020; Hollenbeck and Uetake, 2021).
My findings align with DeCicca et al. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: My study closes this gap in the context of a digital sin tax, bridging the behavioral public finance literature to a rich literature in industrial organization that examines the effect of shrouded attributes and strategic price obfuscation in different market settings (e.g.
**B**: I contribute to this literature by showing that firms can affect “experienced” and “perceived” pass-through rates and the corrective effect of sin taxes by strategic tax shrouding..
**C**: (2013), who find that consumer price search affects the pass-through of cigarette excise taxes.
|
ACB
|
BCA
|
ACB
|
ACB
|
Selection 1
|
<|MaskedSetence|> Firstly, we demonstrate that East German inventors who had access to international technological knowledge via industrial espionage activities organized by the government were more likely to pursue inter-organizational mobility and continue their inventive activities in reunified Germany after 1990 (H1). The case of East Germany allows us to quantify the mobility effect of knowledge inflow in a quasi-experimental setting that mitigates endogeneity and selection effects. We extend previous studies that have focused on individual, firm-specific, or contextual traits as antecedents of mobility decisions (Agarwal et al.,, 2016; Starr et al.,, 2018; Melero et al.,, 2020; Bhaskarabhatla et al.,, 2021; Seo and Somaya,, 2022) by highlighting the role of informal institutions governing an inventor’s access to frontier knowledge. While the type of knowledge that inventors acquire throughout their careers can be regarded as a critical input for inter-organizational mobility (Ganco,, 2013; Campbell et al.,, 2012; Palomeras and Melero,, 2010), a different stream of literature emphasizes the relative importance of innate abilities and peer effects (Nicolaou et al.,, 2008; Bell et al.,, 2019). We are able to inform this debate by showing that the informal institutions facilitating access to frontier knowledge have a significant influence on the mobility patterns of inventors. <|MaskedSetence|> <|MaskedSetence|> The communist ideology of the East German government affected individual preferences towards social policy and behavior (Alesina and Fuchs-Schündeln,, 2007; Brosig-Koch et al.,, 2011), and resulted in a high degree of embeddedness in the local community (Boenisch and Schneider,, 2013). These strong network ties restricted knowledge transfer from other communities and created lock-in effects, thereby limiting future inter-organizational mobility. We find that this community effect is sizeable (45% lower likelihood to continue producing inventions, relative to the mean, following a one-standard-deviation increase in community support for the former socialist regime) and quantitatively dominates the mechanism investigated in the previous hypothesis.
.
|
**A**: Our research contributes to the literature on inventor mobility and knowledge sourcing for organizational search in several ways.
**B**: Our findings show that political imprints and community attitudes can be key determinants of inventor mobility, which expands upon previous research that has investigated directed knowledge exchange via scientific collaborations (Campbell et al.,, 2021), social ties between regions (Hoisl et al.,, 2016), or the access to collaboration networks abroad (Bisset et al.,, 2024).
**C**: However, we also stress that the quantitative effect of an increased inflow of specific technological knowledge is relatively small (9% higher likelihood to continue producing inventions, relative to the mean, following a one-standard-deviation increase in industrial espionage) compared to the mechanism we uncover in hypothesis 2.
Secondly, we provide evidence that East German inventors living in communities with stronger support for the socialist regime were less likely to continue their inventive activities in reunified Germany after 1990 (H2).
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
To explain the optimal mechanism in more detail, we discuss the incentives of the agents to be truthful.
Because the merit-based rule sometimes allocates more than k objects, not all agents that get an object in the first stage can be checked. <|MaskedSetence|> The optimal mechanism provides incentives to be truthful by not always allocating all objects in the first stage; the lottery-based allocation of the remaining objects in the second stage ensures that even agents with low types have a chance to win and are not tempted to lie about their type: their expected probability of getting an object in the lottery equals the expected probability of getting an object in the first stage by lying. <|MaskedSetence|> In that case, she would have a strict incentive to claim a high type, hoping to get an object in the first stage without being checked. <|MaskedSetence|> This contrasts with models of costly verification, where optimal mechanisms can typically be implemented to satisfy these stronger incentive constraints.
Methodologically, we formulate the principal’s problem using “interim rules.” Since there is no point in checking an agent that won’t get an object, we obtain type-dependent feasibility constraints concerning which agents can be checked. This requires a characterization of which interim rules are feasible in the presence of type-dependent capacity constraints..
|
**A**: This suggests that agents might be tempted to claim that they have a high type.
**B**: However, if a low type knew that more than m agents have a type above the higher type, she would anticipate that all objects will be allocated in the first stage and she won’t get an object in the second stage by being truthful.
**C**: This shows that the optimal mechanism, while being Bayesian incentive compatible, does not satisfy stronger ex-post incentive constraints.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 3
|
Results from the mis-classification simulations are graphed in the bottom panel of Figure 10. Clearly results are much more robust to mis-classification of adoption status. Mis-classifying up to five percent of observations has no meaningful impact on the outcomes. Mis-classifying 14% of outcomes results in about half of all regressions producing significant results. Only if one mis-classified 37% of observations does the share of significant p-values fall into the five to ten percent threshold, where their significance could be wholly ascribed to random chance.
The take-away from this exercise is that even results on the efficacy of STRVs using experimental data can be very sensitive to mis-measurement (though less so to mis-classification). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> STRVs are effective during floods and we can measure that effect using a geospatial impact evaluation that relies on EO data. But the method requires extreme care and precision in accurately capturing the variables of interest. The acute need for precision is likely the result of trying to measure higher order treatment effects from a stochastic technology.
.
|
**A**: Perturbing the true yield data by adding to each value a random number drawn from a distribution N(0, 31.75), or two percent of the standard deviation, produces non-significant results in nearly 80% of regressions.
**B**: In terms of what this means for our failure to replicate, the Monte Carlo simulations add support for the optimistic interpretation.
**C**: Significance goes away adding even less noise to the flood data.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 1
|
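The perturbation exercise described above is mechanically simple; the sketch below shows its general shape with simulated data. The outcome is perturbed with mean-zero measurement error of a chosen standard deviation, the regression is re-run many times, and the share of significant p-values is reported. The toy data, the simple bivariate regression, and the noise levels are placeholders, not the study's specification.

```python
import numpy as np
from scipy.stats import linregress

def share_significant(y, treat, sd_noise, n_sims=1000, alpha=0.05, seed=0):
    """Re-run a simple treatment regression after perturbing the outcome with
    N(0, sd_noise) measurement error; report the share of significant p-values."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        y_noisy = y + rng.normal(0.0, sd_noise, size=y.shape)
        hits += linregress(treat, y_noisy).pvalue < alpha
    return hits / n_sims

# toy illustration with simulated yields and a modest true treatment effect
rng = np.random.default_rng(10)
treat = rng.binomial(1, 0.5, size=400).astype(float)
y = 3000 + 60 * treat + rng.normal(0, 1500, size=400)
for sd in (0.0, 31.75, 300.0):
    print(sd, share_significant(y, treat, sd))
```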
Next, in Table 4 we examine the effects of the triple interaction of the lagged fractional index, lagged food insecurity, and contemporaneous government stringency score. <|MaskedSetence|> That said, the same pattern in signs emerges. <|MaskedSetence|> In Malawi and Nigeria, the opposite pattern holds, potentially suggesting that increased specialization leads to more food insecurity. <|MaskedSetence|> Based on the results in Tables 3 and 4, we conclude that livelihood diversification was not effective as an ex post coping strategy for improving food insecurity during the pandemic.
.
|
**A**: In this specification, we also observe that COVID stringency is significantly associated with food insecurity, suggesting a positive, albeit small relationship between the strictness of COVID policies and food insecurity, in Ethiopia and Malawi.
**B**: Three of four results are small and negative in Ethiopia, potentially suggesting that increased specialization leads to less food insecurity.
**C**: For all four measures of food insecurity and across all three countries, coefficients on the interaction term of interest are not significantly different from zero.
|
CBA
|
CBA
|
CAB
|
CBA
|
Selection 4
|
<|MaskedSetence|> In Fig. 8, we look at the customer-level CI scores for a representative customer action. We see that most of the customers have CI values close to the average HTT value. A few customers have low CI values, which shifts the mean to the left. One industry application of CI is marketing optimization. <|MaskedSetence|> <|MaskedSetence|> .
|
**A**:
So far, we have only looked at the mean of customer-level values in Fig 6.
**B**: Having access to distribution of customer-level CI scores as in Fig. 8 can help personalization and aid finer decision making.
Figure 8: Spread of customer-level CI values.
**C**: The red line is the mean of customer-level CI values.
|
CBA
|
ABC
|
ABC
|
ABC
|
Selection 2
|
The participants then completed five more tasks during the experiment. The tasks were all short, taking on average almost exactly ten minutes each to complete. The texts to be translated covered a range of materials including business, academic, and literary texts to simulate a wide variety of potential professional use cases. Overall, participants rated the tasks' similarity to those they complete professionally at 3.53 out of 5. A full description of the tasks involved can be found in Appendix B. <|MaskedSetence|> In return for completing the full survey, all participants were paid $10. They could earn up to $12 more, however, in bonus payments depending on performance. <|MaskedSetence|> These grades were the average of grades from three professional translators who went through more stringent vetting (they had an average of 5+ years of experience each, and were monitored in their grading of sample tasks). <|MaskedSetence|> .
|
**A**:
Participants were given high-powered incentives to complete high-quality tasks.
**B**: These were paid out as $2 per task on which they achieved a score of six points or higher out of seven.
**C**: The graders themselves were incentivized to grade carefully by offering significant bonus payments (up to doubling their payment) based on if their grades given were sufficiently close to the average grade given by other graders.
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
This paper is related to various strands of the literature.
First, the vast literature on approximate factor models, principal component analysis and spiked covariance models,
which includes Wang and Fan (2017), Donoho et al. <|MaskedSetence|> (2000, 2005), Bai and Ng (2006, 2019), Bai and Li (2012, 2016), Lam et al. (2011), Fan et al. <|MaskedSetence|> (2024).
In particular, this work is close to the important contribution of Fan et al. <|MaskedSetence|>
|
**A**: (2024)
that studies the properties of a large class of high-dimensional models, which includes factor models,.
**B**: (2018), Forni et al.
**C**: (2016), Gonçalves and Perron (2020), Barigozzi and Cho (2020), Su and Wang (2017) and Fan et al.
|
BCA
|
BCA
|
BCA
|
CAB
|
Selection 2
|
<|MaskedSetence|> 336). The reward-punishment scheme ensures that the supra-competitive outcomes may be obtained in equilibrium and do not result from a failure to optimize.” (Calvano et al., 2020)
A first attempt might be to say that (at least if there are other optimizers in the market) “non-responsive” algorithms which set prices completely independently of competitor prices (and consequences of competitor prices) should be permitted—indeed, if they cannot react to their competitors, then they cannot threaten them. <|MaskedSetence|> <|MaskedSetence|> Can this condition be relaxed? What if the first entrant to a market deploys (and hence commits to) an algorithm that satisfies these internal consistency conditions, and then the second entrant plays a simple policy that does not satisfy the swap regret condition, but also does not encode threats (e.g. because it is entirely non responsive)? Might we still see competitive prices? Would we, in accordance with the intuition of (Harrington, 2018; Calvano et al., 2020) see competitive prices if the follower not only did not deploy threats, but also succeeded in “optimizing”?
.
|
**A**: But this is far too limiting as it does not allow for algorithms that react to market conditions.
**B**: Recent work by Hartline, Long, and Zhang (2024) and Chassang and Ortner (2023) set out a principled definition of what constitutes non-collusive behavior of a pricing algorithm by defining algorithms that satisfy some internal consistency properties (calibrated regret, also known as “swap regret”) and then shows that if all algorithms in the market satisfy these conditions then the outcome will be competitive prices.
**C**:
“To us economists, collusion is not simply a synonym of high prices but crucially involves “a reward-punishment scheme designed to provide the incentives for firms to consistently price above the competitive level” (Harrington 2018, p.
|
CAB
|
CAB
|
CAB
|
ACB
|
Selection 3
|
The strategic behavior of real estate brokerages has been documented to leverage informational advantages. Agarwal et al. (2019) confirms that brokerages, as market intermediaries, possess nuanced knowledge of market conditions, enabling them to negotiate discounts effectively. Furthermore, Han and Strange (2015) discusses the varying bargaining power of brokerages across unidirectional and bidirectional markets, influencing their operational strategies. This is corroborated by evidence suggesting that properties listed with lower commission rates not only sell less frequently but also take longer to sell (Barwick et al., 2017). Additionally, studies have documented that brokerages may adopt discriminatory strategies, steering minorities into neighborhoods with lower economic opportunities and higher exposures to crime and pollution, thereby contributing to persistent social and economic inequalities in the United States (Christensen and Timmins, 2022). The advent of online platforms has significantly altered the landscape of real estate transactions. <|MaskedSetence|> (2003) notes that while the duration of property searches has not changed markedly, the scope of searches has broadened to encompass more online listings. Moreover, Zhang et al. <|MaskedSetence|> However, a detailed analysis of the impact of the presence of these platforms on market performance of offline stores remains scant.
Moreover, the overall market influence of real estate intermediaries is multifaceted. Utilizing a model predicated on perfect competition, Williams (1998) illustrates that excessive entry of brokers into the market can surpass the optimal allocation, thereby reducing social welfare. <|MaskedSetence|> Additionally, Qu et al. (2021) highlights the moderating role of broker commissions in disseminating information during transactions, facilitating more efficient home sales. Lastly, Agarwal et al. (2024) utilized second-hand real estate transaction data from Beijing to demonstrate that real estate agents may significantly contribute to the formation of Yin-Yang contracts. They quantified the magnitude of the resulting tax evasion, attributing it to the learning-by-doing effect and peer influence among agents. However, their study lacked a spatial analysis component that would consider the local network effects. Other related literature includes (Beck et al., 2022; Levitt and Syverson, 2008; Jud et al., 1996)..
|
**A**: This is corroborated by studies indicating that an increase in the number of brokers can depress house prices and shorten transaction cycles (Hong, 2022).
**B**: Zumpano et al.
**C**: (2021) associates the rise of online platforms with a reduction in existing house prices and an increase in sales volumes, a dynamic influenced by factors such as new house prices and household demographics.
|
BCA
|
BCA
|
BCA
|
ABC
|
Selection 2
|
When the propensity score is challenging to estimate (DGP 2 and 4), the RMSE of the DML estimator increases, primarily due to a higher bias. Calibration methods can considerably improve the ATE estimation in terms of RMSE, with Venn-Abers calibration, Platt, or Beta scaling showing the best performance. <|MaskedSetence|> <|MaskedSetence|> Calibration methods can greatly improve the results, reducing the RMSE for all three machine learners. In particular, for Lasso Venn-Abers calibration reduces the RMSE of DML by almost 60%, reaching nearly the same RMSE as more sophisticated machine learners. Calibration methods reduce the bias in the ATE estimation, which is the main reason for improving the RMSE. After calibration, the coverage of the confidence interval is close to the nominal level of 95% for all calibration methods. These patterns become even more noticeable when the propensity scores become more extreme (DGP 5). <|MaskedSetence|> The reduction stems again from a decrease in the bias.
.
|
**A**: Except for isotonic regression, all calibration methods reduce the RMSE of the DML estimator by at least 50%.
**B**: As long as the outcome regression is easy to estimate and linear (DGP 2), all estimators perform well, with Lasso being the most effective.
**C**: This underscores the practical relevance of the double-robustness property of the DML estimator: despite the difficulty in estimating the propensity score, the simple baseline main effect can be well estimated by a linear model.
Lasso performs significantly worse than the other two machine learners for highly nonlinear baseline main effects (DGP 4).
|
BCA
|
CBA
|
BCA
|
BCA
|
Selection 4
|
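Of the calibration methods compared above, Platt scaling is the easiest to sketch. The snippet below fits a propensity model on one half of the data and then maps its raw scores through a logistic regression fit on the other half; this is only one of the methods mentioned (Venn-Abers, Beta scaling, and isotonic regression are not shown), the split and learners are arbitrary, and feeding the calibrated scores into the DML score function is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def platt_calibrated_propensity(X, d, random_state=0):
    """Fit a propensity model on one split, then Platt-scale its scores on the
    other split by a logistic regression of the treatment on the raw score
    (a common simplification of Platt scaling)."""
    X_tr, X_cal, d_tr, d_cal = train_test_split(X, d, test_size=0.5, random_state=random_state)
    clf = RandomForestClassifier(random_state=random_state).fit(X_tr, d_tr)
    raw_cal = clf.predict_proba(X_cal)[:, 1]
    platt = LogisticRegression().fit(raw_cal.reshape(-1, 1), d_cal)
    def propensity(X_new):
        raw = clf.predict_proba(X_new)[:, 1]
        return platt.predict_proba(raw.reshape(-1, 1))[:, 1]
    return propensity

# toy usage
rng = np.random.default_rng(5)
X = rng.normal(size=(4000, 5))
d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
e_hat = platt_calibrated_propensity(X, d)(X)
print(e_hat[:5].round(3))
```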
There has been much work on generalizations of stable matchings, including weighted stable matchings and stable matchings in non-bipartite settings (see e.g., [78, 36, 72]). <|MaskedSetence|> In this model, the correlation between the two utility values assigned to a candidate by the institutions depends on the group to which the candidate belongs. <|MaskedSetence|> In contrast, the utilities of a candidate for different institutions remain the same in our model, and hence, are always perfectly correlated. <|MaskedSetence|>
|
**A**: Instead, the distribution of the utility of a candidate is group-specific – for one of the groups, the utility values get scaled down by the bias parameter β.
.
**B**: In the context of bias,
[13] considered a stable assignment setting with two groups of candidates and two institutions where both sides have preferences.
**C**: However, the distribution of the utility of a candidate for a specific institution remains the same for both groups.
|
BCA
|
BCA
|
BCA
|
CAB
|
Selection 3
|
<|MaskedSetence|> The points are estimates for the relative size of the bias in the unconditional estimator by the effect from the expansion, and the bars are 95% confidence interval. Standard errors are clustered at the state level.
Figure 4: Estimates of Minimum Wage Increase on Teen Employment
This figure reports the estimates of the dynamic effect of the minimum wage increase on teen employment, using never-treated states as control. <|MaskedSetence|> <|MaskedSetence|> The red lines report the result with the double DiD estimators. Standard errors are clustered at the state level..
|
**A**: The points are estimated average treatment effect and the bars cover the 95% confidence interval.
**B**: The blue lines (“naive”) report the result with unconditional staggered differences in difference estimators.
**C**: This figure reports the estimates of the confoundedness between the minimum wage increase and the ACA Medicaid Expansion, measured by the diagnostics in equation 5, using never-treated states as control.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 1
|
For trades of size between 101 and 1000 shares, we observe consistent positive effects in 2018 across all models, with a notable impact in overall trades (1.87, p < 0.001) and predicted positive windows (1.34, p < 0.001). The year 2019 shows a moderate positive effect, particularly in overall trades (0.81, p < 0.001) and predicted negative windows (0.79, p < 0.05). However, in 2020, a negative effect is observed in predicted positive windows (-3.10, p < 0.001), suggesting reduced predictability, while a strong positive effect is noted in predicted negative windows (5.10, p < 0.001). In 2021, a significant negative impact is seen across all models, with the most pronounced effect in predicted positive windows (-2.48, p < 0.001). <|MaskedSetence|> <|MaskedSetence|> In 2021, a significant negative effect is observed across all models, with the largest impact in predicted positive windows (-6.13, p < 0.001). <|MaskedSetence|> The year 2020 exhibits the highest positive impact in predicted negative windows (15.52, p < 0.001), indicating increased predictability during this volatile period. In contrast, 2021 shows a significant negative effect, especially in predicted positive windows (-8.17, p < 0.001).
.
|
**A**: For trades greater than or equal to 10,000 shares, the interactions reveal a strong positive effect in 2018 and 2019, particularly in overall trades (8.08, p < 0.001 for 2018) and predicted positive windows (5.82, p < 0.001 for 2019).
**B**: Trades of size between 1001 and 9999 shares exhibit positive effects in 2018 and 2019, with the highest impact observed in 2018 for overall trades (4.87, p < 0.001) and predicted positive windows (3.99, p < 0.001).
**C**: The year 2020 shows a mixed effect, with a positive impact in overall trades (4.00, p < 0.001) but a negative effect in predicted positive windows (-1.45).
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 1
|
In the logit case, Eq. <|MaskedSetence|> <|MaskedSetence|> This system of fixed-point equations can be employed in an empirical framework where the discrete choices are described by the mixed logit model or the logit random coefficient model with latent types. Conditional on the latent types, both models reduce to the logit model. The logit random coefficient model was popularized by Berry et al. <|MaskedSetence|>
|
**A**: (11) simplifies to Eq.
**B**: (1995) to study the demand for differentiated products.
.
**C**: (10).
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 4
|
For Portugal, Dauphin et al. [2022] found that a simple linear regression (OLS) lowers the mean absolute error (MAE) by 53.8% from an AR model, but the ridge and LASSO regressions can lower that measure, respectively, by 15.2% and 21.5% from the OLS. In general, penalised regressions and support vector machines (SVM) perform well, but gains are less evident with random forests and neural networks. These results were obtained from a dataset for Portugal with 46 variables that covers the fields of national accounts, housing and construction, labour market, manufacturing, retail trade and consumption, international trade, financial indicators, surveys and other (novel) data like Google searches for relevant keywords or air quality. <|MaskedSetence|> [2022], the use of PLS in forecasting is common in finance applications where market returns are estimated from many predictors. <|MaskedSetence|> <|MaskedSetence|> In this area, Huang et al. [2015] proposed a new investor sentiment index that aims to predict the aggregate stock market returns. Here, PLS was used to extract the most relevant common component from the sentiment proxies, separating out information that is relevant to the expected stock returns from the error and noise..
|
**A**: The variable of interest was the quarterly year-on-year (y-o-y) GDP growth and the study covered a period of 23 quarters from the third quarter of 2015 to the first quarter of 2021, including the outliers related with the COVID-19 recessions and recoveries.
According to the survey of Petropoulos et al.
**B**: Thus, researchers have explored other methods, namely, based on principal component analysis.
**C**: Conventional OLS is highly susceptible to overfit in these applications, which is amplified by the typical noise in stock returns data.
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 4
|
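As a concrete complement to the discussion of PLS above, the sketch below fits a two-component PLS regression on a simulated panel of many noisy indicators driven by a common factor and produces pseudo out-of-sample predictions of a GDP-growth-like target. The data-generating process, sample sizes, and number of components are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# toy nowcasting setup: 90 quarters, 46 noisy indicators driven by one common factor
rng = np.random.default_rng(6)
T, N = 90, 46
factor = rng.normal(size=T)
X = np.outer(factor, rng.normal(size=N)) + rng.normal(scale=2.0, size=(T, N))
y = 0.8 * factor + rng.normal(scale=0.5, size=T)          # y-o-y GDP growth stand-in

pls = PLSRegression(n_components=2).fit(X[:-8], y[:-8])   # estimate on all but the last 8 quarters
pred = pls.predict(X[-8:]).ravel()                        # pseudo out-of-sample nowcasts
print(np.abs(pred - y[-8:]).mean())                       # mean absolute error
```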
Despite its potential, leveraging response time for preference learning presents significant challenges. <|MaskedSetence|> <|MaskedSetence|> Although faster estimators exist [68, 69, 29, 26, 9], they are typically designed for parameter estimation for a single pair of options and do not aggregate data across multiple pairs. This limits their ability to leverage structures like linear utility functions, which are essential in preference learning with large option spaces [40, 55, 22, 57, 20], and cognitive models for human multi-attribute decision-making [65, 24, 77].
To overcome these limitations, we adopt the difference-based EZ diffusion model [68, 9] with linear human utility functions and propose a computationally efficient utility estimator that incorporates both choices and response times. By leveraging the relationship among human utilities, choices, and response times, our estimator reformulates the estimation as a linear regression problem. <|MaskedSetence|> We compared our estimator to the traditional method, which relies solely on choices and logistic regression [4, 30]. Our theoretical and empirical analyses show that for queries with strong preferences (“easy” queries), choices alone provide limited information, while response times offer valuable additional insights into preference strength. Hence, incorporating response times makes easy queries more useful.
.
|
**A**: This reformulation allows our estimator to aggregate information from all available pairs and be compatible with standard linear bandit algorithms [4] for interactive learning.
**B**: While these models provide valuable insights into human behavior and align with neurobiological evidence [71], they rely on computationally intensive parameter estimation methods [52, 39, 3], such as hierarchical Bayesian inference [72] and maximum likelihood estimation (MLE) [38], making them unsuitable for real-time interactive systems.
**C**: Psychological research has extensively studied the relationship between human choices and response times [16, 18] through complex models like Drift-Diffusion Models [51, 38] and Race Models [67, 12].
|
CBA
|
CBA
|
ABC
|
CBA
|
Selection 4
|
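The reformulation described above (choices plus response times feeding a linear regression for the utility weights) can be illustrated with textbook diffusion-model identities: with symmetric barriers at plus/minus a and drift equal to the utility gap, E[2c - 1] = tanh(a * drift) and E[t] = (a / drift) * tanh(a * drift), so the ratio of the two is drift / a, which is linear in the features. The sketch below exploits exactly that; it is not the paper's difference-based EZ estimator, the response times are drawn from a gamma distribution with the model-implied mean rather than the exact first-passage density, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
theta = np.array([1.0, -0.5, 0.3])        # true linear utility weights (toy)
a, n_pairs, reps = 1.5, 300, 40           # diffusion barrier, #query pairs, repetitions per pair

Z = rng.normal(size=(n_pairs, theta.size))            # feature differences x_A - x_B
drift = Z @ theta                                     # utility gap = drift of the diffusion
p_choose_A = 1.0 / (1.0 + np.exp(-2.0 * a * drift))   # diffusion-model choice probability
mean_rt = (a / drift) * np.tanh(a * drift)            # model-implied mean response time

ratios = np.empty(n_pairs)
for i in range(n_pairs):
    c = rng.binomial(1, p_choose_A[i], size=reps)               # observed choices
    t = rng.gamma(shape=4.0, scale=mean_rt[i] / 4.0, size=reps) # RTs with the model-implied mean
    ratios[i] = (2 * c - 1).mean() / t.mean()                   # per-pair summary statistic

# since (2P - 1) / E[t] = drift / a, an OLS of the ratios on Z recovers theta up to the 1/a scale
theta_hat = np.linalg.lstsq(Z, ratios, rcond=None)[0] * a
print(np.round(theta_hat, 2), "vs true", theta)
```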
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> To take an elementary example, consider an attempt to inquire into an agent’s preference between apples and bananas. A modeler might simply ask her what she would choose between an “apple” and a “banana”, but these serve only as coarse descriptions that ignore the particularities: their variety, ripeness, whether they have been dropped, their country of origin, etc. So while the agent might prefer the abstract “banana” to the abstract “apple,” her preference might be overturned upon further specification: between, say, a “185 gram crisp British Gala apple” and a “slightly over-ripe and mildly dented Chiquita banana.”
.
|
**A**:
Second, even when an agent really does choose between precisely identified
outcomes, we face a methodological problem.
**B**: The analyst’s (verbal) description of the agent’s decisions will inevitably be imprecise.
**C**: This creates a wedge between an agent’s preferences and the analyst’s ability to describe, record, or investigate those preferences.
|
ABC
|
ABC
|
ABC
|
BAC
|
Selection 3
|
<|MaskedSetence|> Subjects went through 25 identical rounds of the contest game with moves sequenced, and information revealed across stages, according to the treatment. Roles were randomly assigned and fixed. At the beginning of each round, subjects within each matching group were randomly matched into groups of three. Each subject was given an endowment of 240 points and could invest any integer number of points between 0 and 240 into the contest. <|MaskedSetence|> The winner received a prize of 240 points, and all subjects lost their investments. One round of the 25 was chosen at the end of the session to base subjects’ actual earnings on, at the exchange rate of 20 points = $1.
In Part 3, we administered a short questionnaire. Subjects reported their gender, age, major, and self-assessed competitiveness measured on a Likert scale from 1 (“much less competitive than average”) to 5 (“much more competitive than average”). <|MaskedSetence|>
|
**A**: In Part 2—the main part of the experiment—subjects were randomly divided into fixed matching groups of 9 and only interacted within those groups.
**B**: After all subjects made their investment decisions, one winner within each group was randomly determined according to CSF (1).
**C**: After Part 3, earnings from all parts were calculated and revealed, and subjects were paid privately by check.
.
|
ABC
|
ABC
|
BAC
|
ABC
|
Selection 2
|
Before describing the results in Section 5, we provide a brief theoretical description of the experimental task and formulate specific conjectures. <|MaskedSetence|> In fact, we have confidence in the ecological validity of our framing. <|MaskedSetence|> <|MaskedSetence|> reliability trade-off, is one that university students face regularly in their campus lives. For example, when completing a lengthy timed exam, students must decide between answering more questions or spending more time on each question to increase the probability of answering it correctly. Moreover, none of our subjects mentioned confusion or misunderstanding in the post-experimental questionnaire.
Theoretical predictions.
|
**A**: After all, the costly evidence gathering decision, in the face of a volume vs.
**B**: Although the experiment involved several stages and required subjects to consider a non-trivial risk-return trade-off, the experiment was straightforward to explain and the underlying decisions were familiar ones.
**C**: Our empirical findings suggest that subjects’ choices deviate significantly from the theory predictions; even so, the framework provides helpful structure to understanding risk taking in the forecasting game. We have no evidence that these deviations are a result of misunderstanding or confusion about the experimental task.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
Through our parsimonious model, we capture both the direct and indirect effects of AI adoption across the economy. This provides a more comprehensive view of AI’s potential impact on energy use and emissions than approaches focusing solely on the energy consumption of AI hardware or specific AI applications. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This may involve prioritizing energy-efficient AI technologies, investing in renewable energy sources to power AI infrastructure, and developing strategies to offset increased emissions in AI-intensive industries..
|
**A**: However, the cumulative effect over time and across sectors underscores the importance of considering energy and environmental impacts in AI development and deployment strategies.
**B**: Our findings indicate that, while AI adoption does increase energy use and emissions, the magnitude of this increase is relatively modest compared to overall economic activity.
**C**: Moreover, the variation in impacts across industries highlights the need for sector-specific approaches to managing the energy and environmental consequences of AI adoption.
As AI continues to transform various sectors of the economy, we must balance productivity gains and economic benefits with potential increases in energy demand and associated emissions.
|
ABC
|
BAC
|
BAC
|
BAC
|
Selection 3
|
The lack of robust models to study stakeholder interdependence has led to a scarcity of research in this area, despite its importance. Addressing this gap requires the development of more sophisticated, multi-dimensional models that can better capture the nuances of stakeholder interactions. This complexity necessitates, in our opinion, sophisticated modelling approaches, such as game theory, to predict outcomes and devise effective strategies. <|MaskedSetence|> Game theory provides a mathematical framework to analyse strategic interactions where the outcome for each participant depends on the choices of all involved (Smith (1974)). <|MaskedSetence|> In the context of innovation adoption, the replicator dynamic functions by comparing the success of individuals who adopt a new innovation to those who do not. <|MaskedSetence|> Over time, as these individuals succeed, the innovation is expected to spread through the population, much like a successful strategy spreads in an evolutionary game..
|
**A**: By modelling these interactions as games, researchers can identify potential strategies stakeholders might adopt and the possible outcomes of their decisions.
In fact, game theory can help to map out stakeholders’ preferences and predict which strategies will dominate over time (Taylor and Jonker (1978)).
**B**: The replicator dynamics of game theory considers how strategies evolve over time based on their success relative to others (Ferriere and Michod (2011)).
**C**: The idea is that if an innovation provides a higher payoff—be it economic gain, increased efficiency, improved social standing or, generally speaking, higher utility—then the individuals who adopt it will have a relative advantage over those who stick to older methods.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
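Since the passage above leans on the replicator dynamic, a compact numerical sketch may be useful. Strategies whose payoff exceeds the population average gain share; in the toy payoff matrix below, adopting the innovation carries a payoff premium against either opponent type, so adoption spreads from a small initial share. The payoff values, step size, and horizon are all hypothetical.

```python
import numpy as np

def replicator_step(x, payoff_matrix, dt=0.01):
    """One Euler step of the replicator dynamic: strategies earning more than
    the population-average payoff gain population share."""
    fitness = payoff_matrix @ x
    avg = x @ fitness
    return x + dt * x * (fitness - avg)

# toy innovation-adoption game: strategy 0 = adopt, strategy 1 = keep the old method
A = np.array([[3.0, 2.0],    # adopters earn a premium against either opponent type
              [2.0, 1.0]])   # (all payoff values are hypothetical)
x = np.array([0.1, 0.9])     # start with 10% adopters
for _ in range(5000):
    x = replicator_step(x, A)
print(np.round(x, 3))        # long-run shares of adopters vs non-adopters
```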
<|MaskedSetence|> Again, this is not surprising since each product uses different data and different methods to produce gridded weather data. If all that these descriptive figures showed was one EO product consistently reporting higher levels of rainfall and temperature than other products, researchers could easily adjust their methods to account for this over-reporting. However, Figures 4 through 7 document not just differences in cardinality (one product reports a few more mm of rain) but in ordinality. <|MaskedSetence|> In most countries TAMSAT falls in the middle in terms of how many days are without rain, but in Uganda it reports the driest conditions. <|MaskedSetence|> In Malawi, MERRA-2 produces the fewest GDD but in Niger it produces the most. In most countries, CPC is in agreement with ERA5 but in Tanzania CPC disagrees with both ERA5 and MERRA-2. Based on these changes in the ordering of EO products across countries, we conclude that researchers’ selection of EO product cannot be country-agnostic.
.
|
**A**: In most countries ERA5 reports the most rain but in Niger it reports the least.
**B**: Summarizing the descriptive evidence: there can be substantial differences in the weather reported by each EO product.
**C**: And while compared to rainfall, there is much more agreement among the EO temperature products, there are still significant deviations, particularly in the number of GDD produced by the products.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 3
|
Our empirical application raises the question of whether the advent of the Euro affected the heterogeneity in the housing price dynamics across the Eurozone countries. To address this question, contemplating additional measures of heterogeneity for information enrichment can improve the discriminatory power of the aforementioned penalised estimators. <|MaskedSetence|> <|MaskedSetence|> To this end, one could apply our testing principle to causal inference problems or multivariate time series, potentially using adaptive penalty weights derived using the de-sparsified Lasso. <|MaskedSetence|>
|
**A**: We are currently investigating this approach..
**B**: This topic is closely related to the heterogeneous treatment effect inference literature, which could inspire further extensions.
Another promising direction is inference in high-dimensional regressions for which the double Lasso of Belloni et al. (2014) has become a standard method.
**C**: It would be appealing to investigate whether information enrichment furthermore improves high-dimensional inference for which the wild bootstrap has become a cornerstone (cf. Chernozhukov et al., 2023).
|
BCA
|
BCA
|
BCA
|
BAC
|
Selection 2
|
In a final step, we initialize a dynamic disequilibrium input-output model introduced in Pichler, Pangallo, del Rio-Chanona, Lafond and Farmer (2022) to assess the total (direct + indirect) impacts on the Austrian economy. In particular, we use the economic model to estimate total aggregate impacts on output, profits, wages and salaries, and private consumption and investigate sectoral impacts.
It is important to note that any analysis of the economic impacts of a drastic gas supply reduction involves a high level of uncertainty. <|MaskedSetence|> Our results show that the expected gas reductions to industry are highly different for the two main scenarios considered. <|MaskedSetence|> <|MaskedSetence|> In Scenario B, where we assume no coordination among EU-member states, we estimate a country-wide gas supply shock of about 24%, resulting in a 53% reduction of gas supply for economic production. Consequently, economic impacts vary substantially across the two scenarios. For the cooperative Scenario A and the uncoordinated Scenario B, we estimate that direct output shocks due to the gas supply shock amount to 1.1% and 5.6%, respectively. When feeding these direct output shocks into our sectoral economic model, we find amplification effects of about 1.5, resulting in short-term total economic impacts of up to -1.7% and -8.4% for Scenario A and Scenario B, respectively..
|
**A**: As a small, open, and landlocked economy, Austria would disproportionately benefit from EU-coordinated efforts to secure alternative gas supplies.
**B**: Consequently, we focus only on the short-term effects of a drastic gas supply shock and simulate the adverse economic impacts over a 2-month horizon in the immediate aftermath of the gas supply reduction.
Our approach integrates the different policy-relevant mitigation factors with an economic model, allowing us to assess the relative effectiveness of specific policy measures.
**C**: In this scenario (Scenario A), we estimate a country-wide gas supply shock of about 5% after mitigation measures have been considered, translating into a 10% reduction of gas supply for economic production.
|
BAC
|
BAC
|
CAB
|
BAC
|
Selection 4
|
5 Transition dynamics
We have thus far focused on limit play. <|MaskedSetence|> We will consider the iid signal case such that the learning process Σ(σ) is parametrized by the standard deviation of each period’s signal σ > 0. <|MaskedSetence|> <|MaskedSetence|> The following result shows that the path of aggregate play exhibits qualitatively different behavior depending on learning rates..
|
**A**: We now turn to studying the connection between the speed of learning and the time path of aggregate play.
**B**: Within this iid learning environment, recall that Corollary 2 (i) implies that the limit action is risk dominant for all fundamentals and initial conditions.
**C**: We focus on the iid case because it is a canonical model of learning, and makes results easier to state since Σ is now parametrized by a scalar; nonetheless, the qualitative features of our results extend beyond this case.
|
ACB
|
ACB
|
ACB
|
BCA
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> The social cost of carbon increases with the assumed severity of climate change. If the impact of 2.5°C warming is 1% of GDP worse, the social cost of carbon increases by $20/tC. The shape of the impact function matters too. Commonly used impact functions find a lower social cost of carbon than a smorgasbord of “other” or undefined impact functions (the base category)—but the popular impact functions do not lead to results that differ from other often-used functions. Studies that adopt the findings of econometric estimates of the impact of weather shocks on economic growth (Dell et al., 2012, Burke et al., 2015) report higher social cost of carbon estimates Moyer et al. <|MaskedSetence|> The assumed temperature in 2100 does not affect the reported estimate either, nor does the interaction between temperature and the dummy for whether a carbon tax is imposed. (Recall that a carbon tax reduces emissions and so global warming and the social cost of carbon.) A large number of studies do not report results for global warming; the sample size shrinks accordingly. This affects the size but not the sign or significance of the welfare parameters. The time trend becomes significant..
|
**A**:
Table 1 also shows a number of extended specifications.
**B**: (2014), Moore and Diaz (2015), on average $265/tC higher.
There is no statistically significant impact of imposing the estimated social cost of carbon as a carbon tax (column 3).
**C**: The second column includes the assumed economic impact of 2.5°C warming.
|
ACB
|
ACB
|
BAC
|
ACB
|
Selection 2
|
A possible remedy is to correct for correlation between alternatives in the systematic part of the utility function. For example, in order to take into account that overlapping paths may not be seen as distinct alternatives, the path-size logit model (Ben-Akiva and Bierlaire, 1999) constructs correction terms for overlapping alternatives. <|MaskedSetence|> <|MaskedSetence|> This means the substitution patterns remain the same as in the case without the correction term.
Duncan et al. (2020) defines the path-size term by using probabilities thus allowing more flexibility in substitution patterns, but requiring solving a fixed-point problem to calculate the choice probabilities. <|MaskedSetence|>
|
**A**: This strongly adds to model complexity and computation time..
**B**: We note that the path-size terms are typically formulated as a function of length.
Consequently, for a given choice set, the path-size term in the systematic utility remains the same regardless of changes in the link costs.
**C**: There are several alternative ways to construct correction terms, including the generalized path-size logit (Ramming, 2001), path-size correction logit (Bovy et al., 2008), and adaptive path-size logit (Duncan et al., 2020).
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 3
|
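To make the path-size correction concrete, the sketch below computes the standard length-based path-size factors (each link's length share divided by the number of paths in the choice set that use the link) and adds beta * ln(PS) to the systematic utilities before the logit transform. The toy network, lengths, and coefficient are invented; note that, as the passage stresses, the factors depend only on lengths and link usage, not on link costs.

```python
import numpy as np

def path_size_logit_probs(paths, link_lengths, V, beta_ps=1.0):
    """Path-size logit choice probabilities with the standard length-based factors."""
    usage = {}                                         # how many paths in the set use each link
    for path in paths:
        for a in path:
            usage[a] = usage.get(a, 0) + 1
    ps = []
    for path in paths:
        L = sum(link_lengths[a] for a in path)         # path length
        ps.append(sum(link_lengths[a] / L / usage[a] for a in path))
    u = np.array(V) + beta_ps * np.log(ps)             # length-based correction enters utility
    expu = np.exp(u - u.max())
    return expu / expu.sum()

# toy network: paths 0 and 1 share link "b", path 2 is fully distinct
link_lengths = {"a": 2.0, "b": 3.0, "c": 2.0, "d": 5.0}
paths = [["a", "b"], ["c", "b"], ["d"]]
print(np.round(path_size_logit_probs(paths, link_lengths, V=[0.0, 0.0, 0.0]), 3))
```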
Primiceri, 2022; Carriero
et al., 2024a; Carriero et al., 2024b).
For the cross-section dimension, we allow cross-row and cross-column correlations in idiosyncratic components. <|MaskedSetence|> We achieve this by employing a Kronecker structure in the idiosyncratic components where the covariance matrix of the vectorized error is a Kronecker product of column-wise and row-wise covariance matrices. We impose inverse-Wishart priors on the two covariance matrices, with prior means set as diagonal matrices. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: This data-driven approach offers a more flexible framework compared to exact factor models.
**B**: In addition, compared to a full covariance matrix, this Kronecker structure greatly reduces the number of parameters and improves the efficiency of our Bayesian estimation..
**C**: In our macroeconomic panel example, this implies that we allow individual risks to be correlated across countries or indicators.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 3
|
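A minimal numpy sketch of the Kronecker error structure described in the preceding row, assuming illustrative dimensions (4 indicators by 6 countries). It only shows how the covariance of the vectorized error is assembled from the row-wise and column-wise covariance matrices, and how many free parameters this saves relative to an unrestricted covariance; it is not the paper's estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 4, 6  # e.g. indicators x countries (illustrative)

# Draw two symmetric positive-definite matrices to stand in for the row- and column-wise covariances.
def random_spd(dim):
    a = rng.standard_normal((dim, dim))
    return a @ a.T + dim * np.eye(dim)

sigma_row = random_spd(n_rows)
sigma_col = random_spd(n_cols)

# Covariance of the vectorized idiosyncratic error: Kronecker product of the two small matrices.
sigma_vec = np.kron(sigma_col, sigma_row)

# Parameter count: two small covariance matrices instead of one (n_rows*n_cols)-dimensional one.
full_params = (n_rows * n_cols) * (n_rows * n_cols + 1) // 2
kron_params = n_rows * (n_rows + 1) // 2 + n_cols * (n_cols + 1) // 2
print(sigma_vec.shape, full_params, kron_params)  # (24, 24) 300 31
```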
The system sends a prompt to all participants (each representing a separate ChatGPT-4 session) outlining the double auction rules and providing the current round information. Participants must confirm their understanding of the rules. At the start of each new round, only the information about the previous transaction is updated, while the rest of the message remains the same.
Price Posting and Matching
This stage involves selecting a random session using a number generator to simulate spontaneous price-posting behavior. <|MaskedSetence|> <|MaskedSetence|> A deal occurs when a seller’s price is less than or equal to the buyer’s price, or vice versa. <|MaskedSetence|>
|
**A**: The deal price is added to the transaction history and shared in subsequent rounds..
**B**: If a participant decides to post a price, their response is recorded.
**C**: Three types of prompts are sent, depending on whether a buyer posts a price, a seller posts a price, or both do.
|
CBA
|
CBA
|
CBA
|
BAC
|
Selection 1
|
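The matching rule in the preceding row (a deal occurs when the posted ask is at or below the posted bid) can be sketched as follows. The prices, the random selection of one buyer and one seller, and the convention that the trade executes at the posted ask are all assumptions made for illustration, not details of the authors' prompts.

```python
import random

random.seed(1)

buyers = {"B1": 105, "B2": 98}    # posted bid prices (illustrative)
sellers = {"S1": 100, "S2": 110}  # posted ask prices (illustrative)
transaction_history = []

# Pick one buyer and one seller at random to simulate spontaneous price posting.
buyer, bid = random.choice(list(buyers.items()))
seller, ask = random.choice(list(sellers.items()))

# A deal occurs when the seller's price is at or below the buyer's price.
if ask <= bid:
    deal_price = ask  # assumption: the trade executes at the posted ask
    transaction_history.append((buyer, seller, deal_price))

# The deal price would then be shared with all sessions in the next round.
print(transaction_history)
```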
<|MaskedSetence|> <|MaskedSetence|> A more complex approach is represented by the Generalized Random Forest (GRF) developed by Athey et al., (2019). <|MaskedSetence|> By combining numerous trees, the GRF produces robust and consistent treatment effect estimates under the conditional independence assumption.³ GRF can be adapted for continuous treatment cases. However, instead of estimating ADRFs, it provides estimates of the average partial effect. Moreover, it requires the availability of untreated units to conduct the analysis.
While both strands of literature have developed effective estimators within their respective domains, they fall short in observational settings where treatment effects are heterogeneous across units even for the same treatment intensity, and the treatment is continuous. Applying these estimators in such contexts could lead to oversimplified policy recommendations, obscuring the complex impacts of policy measures. In the following section we introduce the Cl-DRF estimator, a method expressly developed for this evaluation context..
|
**A**: In a causal tree, the aim is to achieve the best prediction of the treatment effect and to do this, the algorithm divides the data to minimize the heterogeneity of treatment effects within the leaves (i.e., the differences in potential outcomes), rather than minimizing the heterogeneity of observed outcomes within the leaves.
**B**: The GRF iterates over random subsets of the data, constructing a causal tree for each subset.
**C**:
For example, the causal trees introduced by Athey and Imbens, (2016) are decision trees tailored to uncover the heterogeneity of treatment effects.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 2
|
A key factor contributing to the scarcity of capital market transactions and the withdrawal of capital market investors is likely their limited understanding of longevity risk, in particular, the challenges associated with forecasting longevity-linked cash flows. Additionally, the lack of liquidity in the market makes it difficult for these investors to transfer, diversify, or hedge the longevity risk they assume. These knowledge gaps and market frictions likely increase the aversion of capital market investors towards longevity risk. As discussed in Section 3.2, this increased risk aversion provides a possible explanation for the rarity of long-term longevity risk transfer transactions in the capital market. <|MaskedSetence|> This preference for short-term instruments in the capital markets is demonstrated by the mortality CAT bonds and the recent emergence of longevity sidecars in the capital market.
Another consequence of the limited understanding of longevity risk is information asymmetry in capital market transactions. <|MaskedSetence|> Existing studies have also identified moral hazard in insurance-linked seciruties (Götze and Gürtler, 2020). <|MaskedSetence|>
|
**A**: Specifically, capital market investors are concerned that those offloading the longevity risk – the global insurers and reinsurers – possess more information about the risk, leading to fears of being sold a ‘lemon’ (Blake and Kearns, 2022).
**B**: In the following section, we will show that the existence of information asymmetry could lead to the market collapse of static contracts in the capital market, while dynamic contracts remain viable under very general conditions.
.
**C**: Specifically, while buyers demand static, long-term hedging instruments, sellers are unwilling to offer such contracts and prefer short-term instruments for longevity risk transfer.
|
ABC
|
CAB
|
CAB
|
CAB
|
Selection 4
|
<|MaskedSetence|> The protocol uses a hash function to select a node’s block randomly. Each node calculates the hash of its block,³⁰ Following the literature (Garay et al., 2015; Pass et al., 2017; Shi, 2022), we model the hash function as a random oracle that always returns a random number in [0,1] on a fresh input. <|MaskedSetence|> <|MaskedSetence|> This field serves the sole purpose of allowing multiple inputs to the hash function for the same block. and the block is valid if the hash is lower than the difficulty threshold $D$.³¹ The difficulty threshold is periodically adjusted.
A node that finds a block with a valid hash is said to have mined a block.
A node’s mining power is the number of hash computations it can attempt per unit time..
|
**A**: A miner can calculate a new hash for the same block by changing a nonce field.
**B**: We may assume that the execution proceeds in (possibly infinitesimally small) time increments, such that a unit of mining power can invoke the hash function once per increment.
**C**: Brief description of the Nakamoto protocol
Each node (miner) organizes valid pending transactions into a suggested block.
|
CBA
|
CBA
|
CAB
|
CBA
|
Selection 2
|
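A toy sketch of the hash-threshold rule described in the preceding row: the hash of (block, nonce) is mapped to [0,1] and the block counts as mined when that value falls below the difficulty threshold D. The threshold value and the byte layout are illustrative assumptions, not parameters of any real network.

```python
import hashlib

D = 1e-5  # illustrative difficulty threshold

def hash_to_unit_interval(data: bytes) -> float:
    # Map a SHA-256 digest to a number in [0, 1), standing in for the random oracle.
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") / 2**256

def mine(block: bytes, max_attempts: int = 1_000_000):
    # Try successive nonces until the hash falls below the threshold.
    for nonce in range(max_attempts):
        h = hash_to_unit_interval(block + nonce.to_bytes(8, "big"))
        if h < D:
            return nonce, h  # a valid block has been "mined"
    return None

print(mine(b"suggested block with pending transactions"))
```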
<|MaskedSetence|> <|MaskedSetence|> (2018) acknowledged the limitation of FRT for testing only sharp null hypotheses, researchers have developed various strategies for different types of weak nulls. <|MaskedSetence|> (2016), Li et al. (2016), and Zhao and Ding (2020) investigate the null hypothesis of no average treatment effect; Caughey.
|
**A**: For example, Ding
et al.
**B**: Since Neyman et al.
**C**: Hull, 2023).
Second, beyond the network setting, this method extends randomization testing to any partial sharp null hypothesis.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 1
|
<|MaskedSetence|> In a tribe of hunters or shepherds a particular person makes bows and arrows, for example, with more readiness and dexterity than any other. <|MaskedSetence|> <|MaskedSetence|> Another excels in making the frames and covers of their little huts or movable houses. He is accustomed to be of use in this way to his neighbours, who reward him in the same manner with cattle and with venison, till at last he finds it his.” ([Smith, 1977], pp.31)
2.1 Specialized Production.
|
**A**: From a regard to his own interest, therefore, the making of bows and arrows grows to be his chief business, and he becomes a sort of armourer.
**B**: The hub-and-spoke model is a useful metaphor for Smith’s social coordination problem of achieving the social division of labor through specialized production.⁵ “As it is by treaty, by barter, and by purchase that we obtain from one another the greater part of those mutual good offices which we stand in need of, so it is this same trucking disposition which originally gives occasion to the division of labour.
**C**: He frequently exchanges them for cattle or for venison with his companions; and he finds at last that he can in this manner get more cattle and venison than if he himself went to the field to catch them.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
Even if the data are generated by a power-law distribution, we still cannot reject the hypothesis that the data come from a log-normal distribution with a very large variance just by fitting the data using the package developed by [3]. So, this statistical package alone is not enough to conclude whether our empirical data come from a power-law distribution or a log-normal distribution.
The uniformly most powerful unbiased (UMPU) Wilks test, as suggested by [9], can be used to distinguish a power-law distribution from a log-normal distribution. This method builds on two ideas: exponentiality can be tested against the normal distribution [10][11] using the saddle point approximation method, and the power-law and log-normal distributions are transformed into the exponential and normal distributions, respectively, after taking the logarithm of the bitcoin balance data. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> As shown in Fig. 3, we can reject the null hypothesis and accept the alternative hypothesis in almost all regions of bitcoin balance except regions that include only the few tens of largest balances. However, these largest balances account for less than 5% of the total number of bitcoins at our specific time point.
.
|
**A**: The test is performed as follows: Firstly, we choose a threshold for the bitcoin balance; secondly, the UMPU Wilks test is performed for bitcoin balances whose values are larger than the threshold by computing the p-value.
**B**: Though the Monte Carlo method can also be used to calculate the p-value, it is very time-consuming here because we have millions of data.
**C**: The null hypothesis for this test is that the data is distributed as a power-law, and its alternative hypothesis is that the data is distributed as log-normal.
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 2
|
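The preceding row argues that a package-based fit alone cannot settle power law versus log-normal. The sketch below shows what such a package-based comparison looks like, assuming the widely used `powerlaw` Python package (Alstott et al.) and synthetic Pareto data standing in for the bitcoin balances; it is not the UMPU Wilks test the authors ultimately rely on.

```python
import numpy as np
import powerlaw  # assumes the `powerlaw` package (Alstott et al.) is installed

rng = np.random.default_rng(0)

# Illustrative data: Pareto-distributed "balances" with tail exponent ~1.5 and xmin = 1.
data = rng.pareto(a=1.5, size=20_000) + 1.0

fit = powerlaw.Fit(data)  # estimates xmin and the tail exponent alpha

# Likelihood-ratio comparison between power-law and log-normal tails:
# R > 0 favours the power law, R < 0 the log-normal; p is the significance of the comparison.
R, p = fit.distribution_compare("power_law", "lognormal")
print(fit.power_law.alpha, fit.power_law.xmin, R, p)
```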
<|MaskedSetence|> The data from these reports include average scores, total test takers and standard deviation in the scores for the mathematics test and the language and writing test. These reports are readily available for years 2016 onward through the College Board website. For the years prior to 2016, we collected the then released reports from the Internet Archive. We then compiled the necessary information about average performance of average high school student in the United States over time. <|MaskedSetence|> We made the appropriate conversion using the concordance table and mapped the average scores from the pre period to the post period.
The concordance table provided by the College board is in multiples of 10. This requires rounding the average SAT score for exams in the pre period to the nearest multiple of 10 before it can be mapped to the average SAT score based on the concordance table. <|MaskedSetence|> For example, if an average SAT score is 514 in the year 2009 and the score is 515 in the year 2010, 514 would be rounded to 510 and 515 would be rounded to 520, before the concordance table can be used. To avoid this problem, we used a simple linear regression model to regress average SAT scores before and after the conversion in the concordance table and then linearly interpolate the average SAT scores, which we call the Concordance SAT scores..
|
**A**:
The College Board provides yearly reports with the population level SAT performance by the cohort of high school students taking the SAT exam.
**B**: Due to this, the direct conversion of SAT scores using the concordance table has certain limitations.
**C**: The summary estimate of the average SAT performance and standard errors are provided in figure.
|
ACB
|
ACB
|
ACB
|
BCA
|
Selection 2
|
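The rounding problem and the linear-regression workaround described in the preceding row can be illustrated with a few lines of numpy. The concordance values below are made up; only the mechanics (fit new-vs-old scores, then interpolate averages such as 514 and 515 without rounding to multiples of 10) mirror the passage.

```python
import numpy as np

# Hypothetical excerpt of a concordance table in multiples of 10 (illustrative values,
# not the College Board's actual mapping).
old_scores = np.array([490, 500, 510, 520, 530])
new_scores = np.array([550, 560, 570, 580, 590])

# Fit a simple linear regression of post-redesign scores on pre-redesign scores ...
slope, intercept = np.polyfit(old_scores, new_scores, deg=1)

# ... and linearly interpolate averages that fall between the table's 10-point steps,
# avoiding the rounding problem described above (e.g. 514 vs 515).
for avg in (514.0, 515.0):
    print(avg, round(slope * avg + intercept, 1))
```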
Categorization, or the arrangement of objects according to some rule, is a central part of our interaction with the world. An influential work that studies this formally is Gärdenfors (2004). There, a central idea is the notion of a conceptual space–in economics, a state space–and its division into convex sets, each containing a prototype, i.e., a central object. For example, one can think of colors: “red” describes a whole family of colors, as does “blue” and so on. <|MaskedSetence|> He also notes that one can instead start with a finite set of prototypes then partition the conceptual space by grouping points together that are closest to each prototype. When the notion of closeness is the Euclidean distance, this corresponds to a particular type of convex partition of the conceptual space, a Voronoi tessellation.
These constructions seem reasonable and realistic, yet the central properties are exogenous. <|MaskedSetence|> Here, we use our earlier results to argue that these properties are natural outcomes of a decision-maker’s (DM’s) optimization when she faces limits about the amount of information she can acquire or process. We provide two different micro-foundations for categorization. <|MaskedSetence|> In the first derivation, we show that this categorization emerges as information becomes sufficiently cheap when a decision maker (DM) faced with a decision problem acquires information flexibly. In the second derivation, we show that this phenomenon emerges when a DM must instead store information in finitely many bins. We show that the optimal way of doing so is precisely categorization.
.
|
**A**: In both the outcome is a convex partition of the state space in which there exists a single representative object (prototype) per partition element.
**B**: That is, convexity of the regions, or existence of the prototypes, is/are assumed.
**C**: Gärdenfors notes several ways of producing this phenomenon: one can define a natural property as a convex region of the conceptual space, in which case the prototype in each region is the central object in each region.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 1
|
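A small sketch of the nearest-prototype rule mentioned in the preceding row: assigning each point of a conceptual space to its closest prototype under Euclidean distance partitions the space into the convex cells of a Voronoi tessellation. The prototype coordinates and sample points are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prototypes in a 2-D conceptual space (illustrative coordinates).
prototypes = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

# Sample points of the conceptual space.
points = rng.uniform(-0.5, 1.5, size=(10, 2))

# Assign each point to its nearest prototype; the induced partition is a Voronoi tessellation.
dists = np.linalg.norm(points[:, None, :] - prototypes[None, :, :], axis=2)
categories = dists.argmin(axis=1)

print(categories)
```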
<|MaskedSetence|> The date of each speech was converted into a Date format, and the data was segmented into quarters using the as.yearqtr function. This segmentation facilitates the analysis of sentiment trends over time.
Specifically, I deployed Natural Language Processing (NLP) techniques to analyze the data. In NLP, a corpus is defined as a structured collection of texts used for statistical analysis and machine learning. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: To prepare the data, I first filtered the dataset to include only speeches from the United States.
**B**: These steps include:
.
**C**: The corpus in this study was created from the speech texts, and it underwent several preprocessing steps to standardize and clean the data.
|
ACB
|
ACB
|
ACB
|
BAC
|
Selection 2
|
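The preceding row describes filtering to US speeches, quarterly segmentation with R's as.yearqtr, and corpus cleaning. A rough Python analogue of those preparation steps is sketched below; the column names and toy records are assumptions, and the cleaning shown is only a minimal stand-in for the unspecified preprocessing steps.

```python
import re
import pandas as pd

# Toy speech records (column names are assumptions).
df = pd.DataFrame({
    "country": ["United States", "United States", "France"],
    "date": ["2020-03-15", "2020-07-01", "2020-05-10"],
    "text": ["Inflation remains LOW.", "Growth is recovering!", "La croissance reprend."],
})

# Keep only United States speeches and segment them into quarters.
us = df[df["country"] == "United States"].copy()
us["quarter"] = pd.PeriodIndex(pd.to_datetime(us["date"]), freq="Q")

# Minimal standardisation: lowercase and strip non-alphabetic characters.
us["clean_text"] = us["text"].str.lower().map(lambda s: re.sub(r"[^a-z\s]", " ", s))
print(us[["quarter", "clean_text"]])
```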
The first part of our analysis explores whether smoke control areas served their intended purpose of reducing air pollution, especially black smoke emissions, by estimating the impact of the introduction of SCAs on local pollution levels. The second part then explores the long-term consequences of smoke control on individuals’ human capital and health outcomes. The unit of interest in these two sections is the pollution station and the individual, respectively. For brevity, this Section refers to both as ‘unit’. <|MaskedSetence|> In our main analysis, we define the latter as units in non-adopting CBs as well as units located outside SCA boundaries but in adopting CBs. Since the latter may be affected by spillover (downwind) effects from neighbouring SCAs, we also run our analysis dropping these units altogether and find very similar results.
Our ‘event’ of interest is the date on which SCAs are submitted to the Ministry. <|MaskedSetence|> First, the proposed operation date was published at the time of submission, indicating when the area is most likely to become smokeless. <|MaskedSetence|> Hence, households started requesting conversions immediately after the submission date.
.
|
**A**: Second, local councils reimbursed costs associated with stove conversions only if they were incurred before the operation date (see Section 2).
**B**: Our identification exploits spatial and time variation in the roll-out of SCAs, comparing the outcomes of these ‘units’ (i.e., pollution levels or individual-level outcomes) located in a SCA before and after its creation, relative to those of a control group of ‘never-treated’ units.
**C**: We focus on this date for two reasons.
|
BCA
|
CAB
|
BCA
|
BCA
|
Selection 3
|
The rapid evolution and commercial integration of large language models show similarities to the early days of search engines, where the line was blurred regarding whether the same advertising rules applied to search engines as traditional advertising and how they should be implemented. This ambiguity led the FTC to write the “2002 Search Engine Letter” to provide clarity and guidance on the principles for non-deceptive practices. <|MaskedSetence|> Moreover, it is vital to identify edge cases that could be subject to a complete ban (e.g., AI-driven toys with conversational capabilities advertising to children). <|MaskedSetence|> <|MaskedSetence|>
|
**A**: At the same time, policymakers can support scientifically sound research to detect undesirable steering of conversations and use these tools to certify and audit chatbots.
**B**: Further research is crucial here as effective audits are inherently more complex compared to deterministic rankings given the probabilistic nature of LLMs and biases embedded in their training data..
**C**: We believe a similar approach is now necessary to ensure that conversational chatbots do not deceive consumers, whether explicitly or implicitly [14].
Regulators must act and specify the rules to prevent deception that may ultimately harm consumers.
|
CAB
|
ACB
|
CAB
|
CAB
|
Selection 4
|
<|MaskedSetence|> Despite InhouseCompany’s high revenue and substantial return on equity (ROE), its contribution margin and net profit margin are considerably lower than those of PrivateCompany. This indicates that InhouseCompany is less efficient at converting revenue into profit, which is a critical measure of operational efficiency. <|MaskedSetence|> The higher costs could mean that public bodies are not getting the best value for their money, which undermines the rationale for using an in-house entity. <|MaskedSetence|>
|
**A**: Public agencies aim to optimize their budgets to maximize the benefits for the community, and paying more for services that could be obtained more cost-effectively from the market contradicts this goal..
**B**:
These elevated costs directly impact InhouseCompany’s profitability and cost-effectiveness.
**C**: Additionally, InhouseCompany’s quick ratio, while adequate, is significantly lower than PrivateCompany’s, suggesting less liquidity and potentially greater financial vulnerability.
Given that InhouseCompany operates on public funds, these inefficiencies are particularly concerning.
|
BCA
|
BCA
|
BCA
|
CBA
|
Selection 2
|
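The ratios compared in the preceding row have standard definitions, sketched below with made-up figures; they are not the actual accounts of either company.

```python
# Illustrative figures only.
inhouse = {
    "revenue": 10_000_000, "variable_costs": 8_500_000, "net_income": 300_000,
    "equity": 1_500_000, "current_assets": 900_000, "inventory": 400_000,
    "current_liabilities": 600_000,
}

contribution_margin = (inhouse["revenue"] - inhouse["variable_costs"]) / inhouse["revenue"]
net_profit_margin = inhouse["net_income"] / inhouse["revenue"]
roe = inhouse["net_income"] / inhouse["equity"]
quick_ratio = (inhouse["current_assets"] - inhouse["inventory"]) / inhouse["current_liabilities"]

print(f"contribution margin: {contribution_margin:.1%}, net margin: {net_profit_margin:.1%}, "
      f"ROE: {roe:.1%}, quick ratio: {quick_ratio:.2f}")
```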
Still, we can encounter situations in which the loss differential is highly autocorrelated even in the presence of a prediction with weakly dependent forecast errors and in samples of moderate size. This can happen when the DM test is used to compare the predictive ability of a selected forecast against a naive benchmark.
This is a common practice, as naive benchmarks are cost-effective and readily available at any time, so they provide a standard reference for comparisons. Using simple benchmarks allows us to understand the added value of a specific forecasting technique, as it is desirable that predictions from sophisticated forecasting methods (for example complex models or expensive surveys) are more accurate than naive benchmarks. However, naive forecasts may in some cases generate relevant autocorrelation in the loss differential.
In this paper, we study the performance of the DM test when the assumption of weak autocorrelation of the loss differential does not hold. <|MaskedSetence|> <|MaskedSetence|> Local to unity, however, seems well suited to derive reliable guidance when the sample is not very large, as is the case in many applications in economics. With this definition the strength of the dependence is determined also by the sample size: a stationary AR(1) process with root close to unity may be treated as weakly dependent in a very large sample, but standard asymptotics may provide poor guidance for cases with smaller samples, and local to unity asymptotics may be more informative. We show that the power of the DM test decreases as the dependence increases, making it more difficult to obtain statistically significant evidence of superior predictive ability against less accurate benchmarks. We also find that after a certain threshold the test has no power and the correct null hypothesis is spuriously rejected. These results caution us to seriously consider the dependence properties of the loss differential before the application of the DM test, especially when naive benchmarks are considered.
|
**A**: In this respect, a unit root test could be a valuable diagnostic for the preliminary detection of critical situations..
**B**: We characterise strong dependence as local to unity as in \citeasnounPhillips1987LocUnity and \citeasnounPhillipsMagd2007MildlyIntegrated.
**C**: This definition is at odds with the more popular characterisation in the literature that treats strong autocorrelation and long memory as synonyms.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 3
|
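For reference, a generic Diebold-Mariano statistic with a Newey-West long-run variance, as commonly implemented. This is a textbook sketch, not the paper's local-to-unity analysis, and the simulated forecast errors are illustrative.

```python
import numpy as np

def dm_stat(loss1, loss2, lags=None):
    """Diebold-Mariano statistic with a Bartlett-kernel (Newey-West) variance estimate."""
    d = np.asarray(loss1) - np.asarray(loss2)            # loss differential
    T = d.size
    if lags is None:
        lags = int(np.floor(4 * (T / 100) ** (2 / 9)))   # common rule-of-thumb bandwidth
    d_c = d - d.mean()
    lrv = d_c @ d_c / T                                  # lag-0 autocovariance
    for k in range(1, lags + 1):
        w = 1 - k / (lags + 1)                           # Bartlett weights
        lrv += 2 * w * (d_c[k:] @ d_c[:-k]) / T
    return d.mean() / np.sqrt(lrv / T)

rng = np.random.default_rng(0)
e1 = rng.standard_normal(200)               # forecast errors of the candidate model
e2 = e1 + 0.3 * rng.standard_normal(200)    # noisier naive benchmark
print(dm_stat(e1**2, e2**2))                # squared-error losses; negative values favour model 1
```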
value $V=\sum_{i}x_{i}v_{i}=30$ but leaves three units of
capacity unoccupied since the total weight is $\sum_{i}x_{i}w_{i}=12$. <|MaskedSetence|> Panel (b) shows an alternative way of filling the
knapsack with items that reach the knapsack weight capacity $W=15$. Here, the rejected item is $i=4$. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The quality of each item is represented by each.
**B**: However, this knapsack
is suboptimal because its total value is $V=\sum_{i}x_{i}v_{i}=28<30$.
**C**: The rejected item $i=5$ is depicted as a dot-dashed
rectangle.
|
CBA
|
CBA
|
CBA
|
ACB
|
Selection 1
|
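The 0/1 knapsack problem behind the figure discussed in the preceding row can be solved with a short dynamic program. The item values and weights below are invented (the row does not list them); only the capacity W = 15 is taken from the text.

```python
# Hypothetical items: values and weights are illustrative, not those of the paper's figure.
values = [10, 8, 7, 6, 5]
weights = [6, 5, 4, 3, 2]
W = 15  # knapsack capacity, as in the passage

# Standard 0/1 knapsack dynamic program over capacities 0..W.
best = [0] * (W + 1)
for v, w in zip(values, weights):
    for c in range(W, w - 1, -1):       # iterate capacities downwards so each item is used at most once
        best[c] = max(best[c], best[c - w] + v)

print(best[W])  # maximal total value achievable within the capacity
```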
The first alternative competition setting assumes that players would have to simultaneously submit their planned attempt weights for all attempts before the first attempt and commit to the plan. <|MaskedSetence|> While introducing this as a formal rule is unrealistic, such a commitment is common at the individual level. We refer to this scenario as the “no pressure during attempt” setting.
The second alternative competition setting posits that players could not observe the attempt weights and outcomes of their rivals after the competition begins. This removes the influence of sequentially updated outcomes from rivals on the success probability. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Similar to the first scenario, formalizing this rule would be challenging, though such a commitment is feasible at the individual level.
**B**: This setting eliminates the possibility of pressure from rivals influencing weight attempts.
**C**: We refer to this scenario as the “no pressure during lifting” setting..
|
CAB
|
BAC
|
BAC
|
BAC
|
Selection 3
|
The remainder of the paper is organized as follows. Section 2 introduces the novel conditional tail index estimation framework. It is shown that OLS can be easily implemented, and the resulting least-squares estimators’ properties are contrasted with those of the maximum likelihood tail index estimator of Wang and
Tsai (2009) which has been extended by Nicolau
et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> All proofs of the results presented in the main text, as well as plots of the covariates used in the empirical section are collected in the Supplementary Material part of the paper.
.
|
**A**: (2023).
**B**: In Section 4 an empirical analysis of the impact of several covariates on 23 commodities’ returns left- and right-tail indexes is provided; and Section 5 concludes the paper.
**C**: Section 3 presents a Monte Carlo evaluation of the procedures and a discussion of the simulation results.
|
ACB
|
BCA
|
ACB
|
ACB
|
Selection 3
|
<|MaskedSetence|> maize milling, refrigeration, water storage equipment, etc.) [1]. <|MaskedSetence|> Nonetheless, about 86% of the users are coming from one of those treated areas, so any comparison with untreated areas would be unbalanced.
Chamas are informal cooperative groups used to pool and invest savings and typical in East Africa, which play a significant role in the Sarafu network [1, 23, 55, 31, 32]. In the data, they are identifiable as group accounts (formal savings group) or savings business accounts (informal savings groups), but they are excluded in the analyses carried out in this paper. <|MaskedSetence|>
|
**A**: In fact, this paper is only focused on user accounts because the main aim is to identify individual strategy in engaging with the Sarafu economic network..
**B**:
It is also important to mention that since 2017 the Kinango (Kwale county) area has been targeted by Grassroots Economics for specific development interventions: donations have been collected to build community-owned assets with the purpose of enhancing community socio-economic resilience (e.g.
**C**: In fact, in the data, Kinango, Mukuru kwa Njenga slum, and Kisauni are the most active geographic areas.
|
BCA
|
BCA
|
BCA
|
ABC
|
Selection 1
|
The first part of Theorem 1 states that, it is always without loss of optimality to restrict to feasible paths which form an NSG at each period. The second part establishes that, if the designer cares about the utility generated at some period, then the formed network must be an NSG among any optimal paths. <|MaskedSetence|> (2016) shows that, if the designer’s objective is to maximize the sum of KB centrality or the sum of the square of KB centrality, and the designer is farsighted, then the finally formed network must be an NSG. Theorem 1 extends the main result of Belhaj
et al. <|MaskedSetence|> First, among all discount functions (not only those assigning strictly positive weights only on the final period), any network along optimal formation paths (not only the finally formed network) is an NSG. <|MaskedSetence|>
|
**A**: Recall that the main result of Belhaj
et al.
**B**: (2016) in two aspects.
**C**: second, a more general class of preferences is allowed.
.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 1
|
The relationship between workplace autonomy and self-esteem has been widely documented in the literature. Schwalbe (1985) highlights that ’workplace autonomy is positively associated with self-esteem’ (p. <|MaskedSetence|> <|MaskedSetence|> Furthermore, autonomy can influence self-esteem through various sources of self-evaluative information, such as social comparisons and perceptions of one’s behavior (Schwalbe, 1985).
Several factors can influence workplace autonomy, including company size, labor income, tenure in the company, workplace location, and job satisfaction. <|MaskedSetence|> Additionally, workplace autonomy can vary significantly across different economic sectors and job types (Lopes, Lagoa, & Calapez, 2014)..
|
**A**: Studies have shown that organizational decentralization and the delegation of authority can increase the perception of autonomy among employees (Friedman, 1999).
**B**: Autonomy allows workers to feel they have control over their work environment, which contributes to higher self-esteem and a positive perception of their own competencies and abilities.
**C**: 519).
|
CBA
|
CBA
|
CBA
|
BCA
|
Selection 3
|
There are two types of correlated effects that impact upon identification in peer effect studies: Common shocks and endogenous peer selection (Bramoullé et al., 2020). Common shocks refer to exogenous shocks that impact all individuals across the entire network, leading to a change in individuals’ behaviour. <|MaskedSetence|> To address the issue of common shocks, we apply time fixed effects to absorb common shocks at each time period. <|MaskedSetence|> Endogeneity arises from self-selection due to unobserved homophily, the tendency to select peers who share similar characteristics (McPherson et al., 2001; Jochmans, 2023). If unobserved shared characteristics between an individual and her peers are correlated with individual and peer outcomes, bias arises within peer effect estimates. <|MaskedSetence|>
|
**A**: To address the issues of endogenous peer selection, we apply individual fixed effects (Nair et al., 2010; Ma et al., 2015; Bramoullé et al., 2020).
.
**B**: Endogenous peer selection refers to the self-selection of peers by an individual who form said individual’s reference group.
**C**: If unaccounted for, this change of behaviour resulting from a common shock will confound peer effect estimates, as it will be unknown whether the common shock or the peer effects led to the change in an individual’s outcome.
|
CBA
|
CBA
|
BAC
|
CBA
|
Selection 4
|
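The two-way fixed-effects logic in the preceding row (individual effects against unobserved homophily, time effects against common shocks) can be sketched with the `linearmodels` package. This is a naive illustration on simulated data; it ignores the reflection problem and the other identification issues a real peer-effects design must handle.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS  # assumes the linearmodels package is installed

rng = np.random.default_rng(0)
n, t = 50, 8
periods = pd.date_range("2015-01-01", periods=t, freq="QS")
idx = pd.MultiIndex.from_product([range(n), periods], names=["individual", "period"])

# Illustrative panel: outcome y and the average outcome of each individual's peers.
df = pd.DataFrame({"peer_outcome": rng.standard_normal(n * t)}, index=idx)
df["y"] = 0.3 * df["peer_outcome"] + rng.standard_normal(n * t)

# Individual fixed effects absorb time-invariant unobserved homophily;
# time fixed effects absorb common shocks hitting everyone in a given period.
res = PanelOLS.from_formula(
    "y ~ peer_outcome + EntityEffects + TimeEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```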
<|MaskedSetence|> More generally speaking, as the sampling frequency increases and the time interval of a discrete-time Markov process goes to zero, it weakly converges to a diffusion process. The related research goes back to Stroock and Varadhan (1979), Kushner (1984), Ethier and Kurtz (1986). <|MaskedSetence|> <|MaskedSetence|> This idea is known as Quasi Approximate Maximum Likelihood (QAML) estimation (see e.g. Barone-Adesi et al., 2005; Fornari and Mele, 2006; Stentoft, 2011; Hafner et al., 2017).
In the context of the SD model, Buccheri et al. (2021) explored the continuous-time limit of SD volatility models, obtaining a bivariate diffusion where the two Brownian motions are always independent. While this result recovers Nelson’s limit in the case of a normal density, it actually fails to capture the well-known Heston model. In the Heston model, the Brownian motions that drive returns and volatility are (negatively) correlated, a key feature that characterizes the leverage effect in the market.
.
|
**A**: In this sense, (1.2) is also known as the continuous-time GARCH model in some financial literatures.
**B**: Therefore, it can be seen that discrete-time models are intricately related to its approximate continuous-time counterpart.
**C**: A useful insight is that by estimating the parameters of a discrete-time model, one can recover the parameters of the continuous-time process it approximates.
|
ABC
|
ABC
|
BCA
|
ABC
|
Selection 4
|
same in both states for every consumer type $M$. Consequently, aggregate
demand is the same in both states. Turning to the supply side, by assumption
$S(\theta)\neq S(\theta^{\prime})$. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Therefore, the L.H.S. of (1) is different in the two states, such that $\pi^{\star}(\theta,\phi)\neq\pi^{\star}(\theta^{\prime},\phi)$.
**B**: Thus, supply is
different in the two states while the price is the same.
**C**: This can only be
consistent with market clearing if demand is flat around $\phi$ in $\theta$.
|
CBA
|
ABC
|
ABC
|
ABC
|
Selection 3
|
4.1 Data
We assemble our data set from different sources. <|MaskedSetence|> Louis.² Accessible through https://fred.stlouisfed.org/. <|MaskedSetence|> Energy Information Administration (EIA).³ Accessible through https://www.eia.gov/opendata/. <|MaskedSetence|>
|
**A**: Energy consumption data are proxied by the so-called total primary energy consumption measured in quadrillion British thermal units (BTU), which is made freely available by the U.S.
**B**: Crucially, we consider all the time series with a quarterly frequency in the period 1973-2018.
.
**C**: We obtain real GDP and Gross Fixed Capital Formation (investment hereafter) from the Federal Reserve Economic Data (FRED) as made available by the Federal Reserve Bank of St.
|
CAB
|
CAB
|
CAB
|
CBA
|
Selection 3
|
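A hedged sketch of pulling quarterly series like those described in the preceding row via `pandas_datareader`. The FRED series codes are guesses for illustration (check FRED for the exact identifiers used in the paper), and the EIA primary-energy series would need the EIA open-data API instead.

```python
import pandas as pd
from pandas_datareader import data as web  # assumes pandas_datareader is installed

start, end = "1973-01-01", "2018-12-31"

# Illustrative FRED series: real GDP and real gross private domestic investment, quarterly.
gdp = web.DataReader("GDPC1", "fred", start, end)
inv = web.DataReader("GPDIC1", "fred", start, end)

df = pd.concat([gdp, inv], axis=1)
print(df.head())
```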
A handful of papers have explored theoretical models where learning takes place through a sequence of randomly generated games. In a seminal contribution, LiCalzi, (1995) studied fictitious-play dynamics. In his model, agents best respond upon observing which game they are playing, but beliefs over an opponent’s behaviour are formed by pooling their actions in all past games.⁵ Relatedly, a substantial literature has also studied players who form a coarse view (or have a misspecified model) of the behaviour of the opponents. See Jehiel, (2005) for a recent survey. Steiner and Stewart, (2008) model the play of games that have never been seen before by equipping the space from which games are drawn with a similarity measure. Players best respond to their learned belief about the behaviour of opponents, which is determined by a weighted average of behaviour on past play based on the measure of closeness between games. Mengel, (2012) studies players engaging in adversarial reinforcement learning over both how to best partition a finite space of games, subject to a convex cost of holding finer partitions, and which behaviour to adopt in each element of the partition. Her approach with its endogenous partitioning of games is close in spirit to ours.
However, her assumption that the set of possible games is finite allows learning to take place game-by-game when partitioning costs are small, which is in contrast to both Steiner and Stewart, (2008) and us.
Our neural network provides a unique play recommendation for any possible game as a result of a competitive learning process. <|MaskedSetence|> In Selten et al., (2003), competing groups of students were asked to write programs able to offer a play recommendation for any game. <|MaskedSetence|> Programs were updated by the students and tested again, and the process was repeated for five iterations. The software produced by the students in this fashion in the course of a semester ended up failing to compute Nash equilibrium in games with only one mixed strategy but achieved 98% success in choosing a pure Nash in dominant solvable games. <|MaskedSetence|> Recently, Lensberg and Schenk-Hoppé, (2021) have pursued a related idea computationally, but using genetic algorithms rather than students. A randomly mutating population of rules to play games competes at each iteration and more successful rules reproduce more. While Lensberg and Schenk-Hoppé, (2021) agree with us regarding the selection of the risk-dominant equilibrium in $2\times 2$ games, both they and Selten et al., (2003) conclude, in contrast to us, that the identified average heuristic at convergence is not, in general, a refinement of Nash..
|
**A**: When faced with multiple pure equilibria, the utilitarian one was favoured.
**B**: An analogous approach has been followed by others relying on different methodologies.
**C**: The programs were then competitively tested in a set of randomly generated games and feedback was provided to the students.
|
BCA
|
BCA
|
CBA
|
BCA
|
Selection 4
|
<|MaskedSetence|> Research in social psychology (Meegan, 2010; Różycka-Tran, Boski, and Wojciszke, 2015; Davidai and Ongis, 2019) documents the prevalence of zero-sum thinking. Within economics, Chinoy et al. <|MaskedSetence|> <|MaskedSetence|> Bergeron et al. (2024) use an evolutionary model to show that zero-sum interactions lead to belief systems that demotivate effort..
|
**A**: (2021) also find evidence suggestive of zero-sum thinking in that across a range of experimental treatments on adverse and advantageous selection, subjects distrust better-informed partners who have conflicting interests but fail to trust those with aligned interests.
**B**:
Related Literature:
This paper contributes to the understanding of distrust and zero-sum thinking in politics, an issue that has been studied from various perspectives.
**C**: (2024) find that both Democrat and Republican voters engage in zero-sum thinking, and that those who exhibit a greater tendency to do so also support more redistribution and stricter immigration controls.¹⁰ Ali et al.
|
BCA
|
BCA
|
BCA
|
CAB
|
Selection 2
|
For high-dimensional settings, there are two key challenges related to the proliferation of the VAR coefficients. <|MaskedSetence|> In many large-scale applications, there are far more VAR coefficients than the number of observations, which makes it necessary to regularize these VAR coefficients. Second, due to the proliferation of VAR coefficients, sampling them tends to be very computationally intensive.
These two related challenges are typically tackled separately in the literature. <|MaskedSetence|> <|MaskedSetence|> The computational challenge is addressed by developing efficient MCMC or variational methods to sample the large number of VAR coefficients (CCM19; CCCM22; GKP23; BBB24). We instead take an alternative approach that tackles these two challenges simultaneously by imposing a low-rank structure on the VAR coefficients..
|
**A**: For instance, Bayesian shrinkage priors are widely used to regularize the VAR coefficients.
**B**: These include the family of Minnesota priors (DLS84; litterman86; KK93; KK97; GLP15; chan22) and various adaptive hierarchical shrinkage priors (HF19; KP19; KH20; chan21).
**C**: First, the number of VAR coefficients increases quadratically in $n$.
|
CBA
|
CAB
|
CAB
|
CAB
|
Selection 3
|
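The low-rank idea mentioned at the end of the preceding row amounts to parameterising the coefficient matrix as a product of two thin matrices, which is easy to see in a few lines (dimensions below are illustrative).

```python
import numpy as np

# Sketch: an n x n VAR coefficient matrix restricted to rank r is written as U @ V.T,
# cutting the parameter count from n**2 to 2*n*r.
n, r = 100, 5
rng = np.random.default_rng(0)
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
A = U @ V.T

print(A.shape, n * n, 2 * n * r)  # (100, 100) 10000 1000
```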
4 Methods
In this work we aim at studying the effects of cloud purchases on firms’ sales growth. <|MaskedSetence|> one year) growth rates as the independent variable would not capture the relation at stake. Indeed, as suggested by several works (e.g. Brynjolfsson & Hitt 2003, Brynjolfsson et al. 2018, Acemoglu & Restrepo 2020, Babina et al. 2024), the effects of digital technology diffusion may take time to materialise due to the uncertainty characterising, and the implementation lags induced by, the large and complex organisational changes associated with the adoption of ICTs. This holds true for cloud technologies as well. Indeed, as discussed in Section 2, the adoption of cloud involves a transition from investments in physical IT capital to intermediate IT costs, and therefore a complex change of the adopters’ organisational structure. The impact of this change on firm performance may therefore take several years to manifest. Moreover, the ICT surveys fail to provide precise information on the first year of cloud adoption. <|MaskedSetence|> <|MaskedSetence|> In the latter scenario, the impact of cloud usage could be negligible due to the lags in the materialisation of the impact of cloud on firm performance. However, we know that the purchase of cloud services by firms before 2009 was highly improbable in the US (Bloom & Pierri 2018), probably due to the high prices associated with cloud service provision until the 2010s (Byrne et al. 2018, Coyle & Nguyen 2018). This suggests that the adoption of cloud technology by French firms, as observed in our sample for the years 2015 and 2017, very likely started after 2009.
.
|
**A**: Consequently, an estimation involving short-term growth rates may conflate the effects for firms that have been using cloud services for several years with those that have recently adopted cloud technology.
**B**: In the former scenario, our observations would reflect not the cumulative impact of cloud usage but rather the impact of just one year, potentially leading to an underestimation of its effect on firm performance.
**C**: To this purpose, observing short term (e.g.
|
CAB
|
CAB
|
BAC
|
CAB
|
Selection 2
|