text_with_holes (string, 114–4.02k) | text_candidates (string, 58–3.84k) | A (6 values) | B (6 values) | C (6 values) | D (6 values) | label (4 values)
---|---|---|---|---|---|---
<|MaskedSetence|> <|MaskedSetence|> Recent research has shown that strong inductive biases or grounding of communication protocols are necessary for the protocol to be compositional (see, e.g., Kottur et al. (2017); Słowik et al. (2020b)). The inductive bias can be imposed on the architecture of the agents or the training procedure. <|MaskedSetence|> A model-based approach was proposed by Choi et al. (2018) and Bogin et al. (2018), who build upon the obverter algorithm (Oliphant and Batali, 1997; Batali, 1998).
Słowik et al. (2020a) explore games with hierarchical inputs and show how agents implemented as graph convolutional networks obtain good generalization.
|
**A**: The topic of communication is actively studied in multi-agent RL; see Hernandez-Leal et al. (2020, Table 2) for a recent survey.
**B**: Compositionality is often investigated in the context of signaling games (Fudenberg and Tirole, 1991; Lewis, 1969; Skyrms, 2010; Lazaridou et al., 2018).
**C**: For instance,
Das et al. (2017) place pressure on agents to use symbols consistently across varying contexts by frequently resetting the agent’s memory.
|
CBA
|
ABC
|
ABC
|
ABC
|
Selection 3
|
<|MaskedSetence|> In [19], it is shown how safe and optimal reward functions can be obtained, and how these are related to CBFs. The authors in [20] use CBFs to learn a provably correct neural network safety guard for kinematic bicycle models. <|MaskedSetence|> In [23], it is shown how additive and multiplicative noise can be estimated online using Gaussian process regression for safe CBFs. <|MaskedSetence|> A similar idea is followed in [25] where instead a projection with respect to the CBF condition is episodically learned. Imitation learning under safety constraints imposed by a Lyapunov function was proposed in [26]. Further work in this direction can be found in
[27, 28, 29].
|
**A**: The authors in [24] collect data to episodically update the system model and the CBF controller.
**B**: The authors in [21] consider that uncertainty enters the system dynamics linearly and propose to use robust adaptive CBFs, as originally presented in [22], in conjunction with online set membership identification methods.
**C**:
Learning with CBFs: Approaches that use CBFs during learning typically assume that a valid CBF is already given, while we focus on constructing CBFs so that our approach can be viewed as complementary.
|
CBA
|
CBA
|
CBA
|
ACB
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> Benefiting from the distribution-free property of DCDFM, our theoretical results under DCDFM are general. In particular, when DCDFM reduces to DFM, our theoretical results are consistent with those under DFM. When DCDFM degenerates to DCSBM, our results also match classical results under DCSBM. Numerical results on both simulated and real-world networks show the advantage of introducing node heterogeneity to model weighted networks.
(c) To measure the performance of different methods on real-world weighted networks with unknown node labels, we propose a general modularity as an extension of the classical Newman’s modularity [23]. For weighted networks in which all edge weights are nonnegative, the general modularity is exactly Newman’s modularity. <|MaskedSetence|> Numerical results on simulated networks generated under DCDFM for different distributions, and on empirical un-weighted and weighted networks with known ground-truth node labels, support the effectiveness of the general modularity. By using two community-oriented topological measures introduced in [24], we find that the modularity is effective and that our nDFA returns reasonable community partitions for real-world weighted networks with unknown ground-truth node labels.
|
**A**: We build a theoretical framework on consistent estimation for the proposed algorithm under DCDFM.
**B**: (b) To fit DCDFM, an efficient spectral clustering algorithm called nDFA is designed.
**C**: For weighted networks in which some edge weights are negative, the general modularity takes the negative edge weights into account.
|
BAC
|
CAB
|
BAC
|
BAC
|
Selection 1
|
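The general modularity described in the passage above extends Newman’s modularity to networks with negative edge weights. A minimal NumPy sketch of one common signed-network convention (splitting the adjacency matrix into positive and negative parts and combining their modularities) is shown below; this convention is our assumption, not necessarily the paper’s exact definition:

```python
import numpy as np

def newman_modularity(W, labels):
    """Classical Newman modularity for a nonnegative weighted
    adjacency matrix W (symmetric), O(n^2) for clarity."""
    two_m = W.sum()
    k = W.sum(axis=1)
    Q = 0.0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] == labels[j]:
                Q += W[i, j] - k[i] * k[j] / two_m
    return Q / two_m

def signed_modularity(W, labels):
    """Hypothetical generalization to signed weights: split W into its
    positive and negative parts and combine their modularities, weighted
    by total positive/negative strength (a common signed-network
    extension).  Reduces to Newman modularity when all weights are
    nonnegative, matching the claim in the passage."""
    Wp, Wn = np.maximum(W, 0), np.maximum(-W, 0)
    wp, wn = Wp.sum(), Wn.sum()
    if wn == 0:
        return newman_modularity(Wp, labels)
    Qp = newman_modularity(Wp, labels) if wp > 0 else 0.0
    Qn = newman_modularity(Wn, labels)
    return (wp * Qp - wn * Qn) / (wp + wn)
```

For an all-nonnegative weight matrix the two functions agree, which mirrors the statement that the general modularity is exactly Newman’s modularity in that case.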
The importance sampling enables V-Trace to be truly scalable in multi-node setups. MA-Trace enjoys the same property. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The speed is measured as the average number of steps processed per second.
|
**A**: Speed of MA-Trace training with respect to the number of distributed workers, with standard deviation shaded.
**B**: See Figure 3 and Appendix E.3.
Figure 3.
**C**: Importantly, we do not observe any degradation in the training performance when trained in the multi-node setup.
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 4
|
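The clipped importance sampling behind V-Trace (and hence MA-Trace) can be sketched in a few lines; the backward recursion follows the published V-trace target, while the function and variable names here are our own:

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap, rhos, gamma=0.99,
                   rho_bar=1.0, c_bar=1.0):
    """V-trace targets computed backwards over a trajectory.
    `rhos` are per-step ratios pi(a|x)/mu(a|x) between the learner's
    and the behaviour policy; clipping them (rho_bar, c_bar) is what
    keeps off-policy correction stable with distributed workers."""
    T = len(rewards)
    rho = np.minimum(rhos, rho_bar)           # clipped IS weight for the TD error
    c = np.minimum(rhos, c_bar)               # clipped "trace" weight
    next_values = np.append(values[1:], bootstrap)
    deltas = rho * (rewards + gamma * next_values - values)
    vs = np.zeros(T)
    acc = 0.0                                 # holds v_{t+1} - V(x_{t+1})
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * c[t] * acc
        vs[t] = values[t] + acc
    return vs
```

When the behaviour and learner policies coincide (all ratios equal one), the targets reduce to ordinary n-step returns.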
5.1 Models Overview
The exploration starts with an overview of how 10 RF and 10 AB models performed based on three validation metrics: accuracy, precision, and recall. The models are initially sorted according to the overall score, which is the average sum of the three metrics. This choice guides users to focus mostly on the right-hand side of the line chart (as showcased in Section Use Case). <|MaskedSetence|> All visual representations share the same x-axis: the identification (ID) number of each model. The design decision to align views vertically enables us to avoid repetition and follows best practices. <|MaskedSetence|> <|MaskedSetence|> Then, they are divided into two groups reflecting the two algorithms. It also presents the confusion of all individual classes for the different instances when comparing two subsequent models, as illustrated in both Figure 1(a) and Figure 4(a). The width of the band between two consecutive nodes indicates the increase or decrease in confusion from one model to the other sequentially, so the smaller the height of a line, the better a model’s prediction compared to its predecessor or successor. The same effect applies to each node that absorbs the lines. With this plot, users can focus on the misclassified training instances that are more important for a given problem. For example, a medical doctor is typically cautious when dealing with false-negative instances since human lives may be at risk. Users can also check how many misclassified instances exist in each model and propagate from one model to another for each label class. Inspired by previous works [Liu2018Visual; Wang2021Investigating], we utilize a distinct visual metaphor for this plot to convey, as concisely as possible, the per-class confusion for the several ML models under examination.
|
**A**: The line chart in Figure 1(a) always presents the worst to best models from left to right.
**B**: Green is used for the RF algorithm, while blue is for AB.
**C**: The y-axis denotes the score for each metric as a percentage, with distinct symbols used for the different metrics.
The confusion plot inspired by Sankey diagrams in Figure 1(a) visually maps a confusion matrix of only false-positive and false-negative values for each model into nodes with different heights depending on the number of confused training instances.
|
BAC
|
BAC
|
CAB
|
BAC
|
Selection 2
|
Various other aspects of polarization in MIMO systems have been investigated as well. Ref. <|MaskedSetence|> A MIMO system with dual-polarized antenna elements can have lower spatial diversity but higher spatial multiplexing gain than a conventional MIMO system with single-polarized antennas, particularly in Ricean fading channels with a high K-factor [17]. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Various channel sounding campaigns and channel models provide insights into the characteristics of wireless channel polarization [26, 21, 22, 20, 27, 28, 23, 29, 30].
**B**: [16] showed that space-time block coding (STBC) with single polarization outperforms STBC with dual polarization in Rayleigh and Ricean fading channels.
**C**: It is noteworthy that the extent of benefit from dual-polarized antennas depends on the associated schemes to exploit the characteristics of polarized wireless channel [15, 16, 17, 1, 6].
|
BCA
|
BCA
|
BCA
|
BAC
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> The mean radial error (MRE) of our trained model varies from 2.9 mm under the “best” template to 4.5 mm under the “worst” template. It is evident that there is a large gap between the best and the worst choices. <|MaskedSetence|> In this paper, we attempt to fill this blank.
|
**A**: Thus, a selection question naturally stands out: regarding the “gap” over samples, how to find and annotate the most “valuable” images in order to achieve the best performance with a deep model trained under such limited supervision?
To the best of our knowledge, there is no ready answer to the above question.
**B**:
However, during our research following the work of [42], we observe an interesting phenomenon (see Figure 1).
**C**: The template choice highly impacts the final performance.
|
BAC
|
BCA
|
BCA
|
BCA
|
Selection 4
|
From now on, we use the number of communities determined by KDFSP in Table 2 for each dataset to estimate community memberships. We compare the fuzzy weighted modularity of DFSP and its competitors, and the results are displayed in Table 3. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Hence, DFSP runs faster than its competitors.
|
**A**: Furthermore, the running times of DFSP, GeoNMF, SVM-cD, and OCCAM for the Cond-mat-1999 network are 29.06 seconds, 32.33 seconds, 90.63 seconds, and 300 seconds, respectively.
**B**: Meanwhile, according to the fuzzy weighted modularity of DFSP in Table 3, we also find that Gahuku-Gama subtribes, Karate-club-weighted, Slovene Parliamentary Party, Les Misérables, and Political blogs have a more clear community structure than Train bombing, US Top-500 Airport Network, US airports, and Cond-mat-1999 for their larger fuzzy weighted modularity.
**C**: We see that DFSP returns larger fuzzy weighted modularity than its competitors except for the Karate-club-weighted network.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 1
|
Team cwmok
Mok and Chung (Mok and Chung, 2022b) proposed a 3-step registration method, which comprises an affine pre-alignment, a convolutional neural network with forward-backward consistency constraint, and a nonlinear instance optimization. First, possible linear misalignments caused by the tumour mass effect were eliminated with the descent-based affine registration method. <|MaskedSetence|> During the training phase, regions with missing correspondence were excluded from the similarity measure. <|MaskedSetence|> Finally, non-rigid instance optimization with forward-backward consistency constraint was introduced to correct solutions from the previous step that were biased because of insufficient training and discrepancy in distributions. This step further improved
the robustness and registration accuracy of initial solutions. <|MaskedSetence|> The implementation of DIRAC is available at https://github.com/cwmok/DIRAC.
|
**A**: The non-parametric deformation was controlled by the forward-backward consistency constraint as in the previous step and was updated using an Adam optimizer together with multi-level continuation to avoid local minima.
**B**: Second, conditional deep Laplacian pyramid image registration network with forward-backward consistency (DIRAC) (Mok and Chung, 2022a, 2021) was leveraged to jointly estimate the regions with missing correspondence and bidirectional nonlinear displacement fields for the pre-operative and follow-up scans.
**C**: This reduced the effect of the pathological regions on the registration algorithm in an unsupervised manner.
|
ACB
|
BCA
|
BCA
|
BCA
|
Selection 4
|
Example 1, despite its simplicity, illustrates one of the limitations of the CQA approach. <|MaskedSetence|> a tuple of database values) is entailed by all repairs, or is not entailed by some repair. But, as discussed in [8], the former is too strict, while the latter is not very useful in a practical context. <|MaskedSetence|> <|MaskedSetence|> Of course, to compute the relative frequency of a tuple, we need a way to compute (i) the number of repairs that entail a tuple (the numerator), and (ii)
the total number of repairs (the denominator).
|
**A**: For instance, in Example 1, the relative frequency of the empty tuple, which corresponds to true and is the only candidate answer as the query is Boolean, is 1/2 since, out of four repairs in total, only two of those entail the query.
**B**: Instead, we would like to know how often a tuple is an answer, that is, its relative frequency, or, in other words, the percentage of repairs that entail that tuple.
**C**: The notion of certain answers only says that a candidate answer (i.e.
|
CBA
|
CBA
|
BAC
|
CBA
|
Selection 4
|
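The relative frequency described in the passage above is simply the fraction of repairs that entail an answer. A toy sketch under one standard repair semantics (keep exactly one tuple per primary-key group; the paper’s semantics may differ) makes the numerator and denominator concrete:

```python
from itertools import product

def repairs(table, key):
    """Enumerate subset repairs of a relation violating a primary key:
    each repair keeps exactly one tuple per key value.  This is one
    standard notion of repair, assumed here for illustration."""
    groups = {}
    for row in table:
        groups.setdefault(row[key], []).append(row)
    return [list(choice) for choice in product(*groups.values())]

def relative_frequency(table, key, query):
    """Fraction of repairs that entail a Boolean query
    (query: repair -> bool)."""
    rs = repairs(table, key)
    return sum(query(r) for r in rs) / len(rs)
```

With two key groups of two conflicting tuples each there are four repairs, and a query entailed by two of them gets relative frequency 1/2, mirroring the fraction in the passage’s example.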
6.2.1 Ring-lattice graphs as planted communities
We study an example that plays a similar role to the example of Salathé and Jones [38]. In our example, setting the absorption rates of bridge nodes to larger values than the absorption rates of other nodes is analogous to removing community bridges. <|MaskedSetence|> <|MaskedSetence|> However, our adaptations of InfoMap are designed for such situations. The values that we obtain for the associated map function play the role of the modularity values in [38]. <|MaskedSetence|>
|
**A**: Because the network is the same in all stages, maximizing modularity or using the standard InfoMap algorithm cannot reveal how effective community structure changes as we change the node-absorption rates, which correspond to the recovery intensities of a disease.
**B**: They reflect an effective sparsification between planted communities due to increasing the node-absorption rates of some bridging nodes.
This effective sparsification between communities, which entails a corresponding increased isolation of communities from each other, is analogous to the more literal sparsification in [38].
**C**: Unlike in the example of Salathé and Jones, our example uses the same network at each stage and we change the absorption rates of specific bridging nodes instead of rewiring community bridges.
|
CAB
|
ACB
|
CAB
|
CAB
|
Selection 4
|
Temporal granularity and conveyance of explanations: While the format and content of explanations have been the primary focus of XAI research, it is noteworthy that another important consideration, the time granularity of explanations, has not been well studied in the state of the art. <|MaskedSetence|> <|MaskedSetence|> This concept has further been validated by Haspiel et al.’s user study, in which human judgment shows that explanations should be delivered before an action is decided rather than after it is performed [194]. <|MaskedSetence|> If the action to be performed soon is hazardous, a human driver or passenger can manually intervene in the situation with such explanations and prevent a potential danger ahead.
|
**A**: In general, the timing perspective of AVs explanations can be analyzed in three ways: 1) Should explanations be delivered before action is chosen or after action is performed? 2) What is the appropriate lead time for a safe transition from an automatic mode to a human takeover? and 3) Should explanations be delivered seamlessly or only when it is required? We analyze these nuances separately as follows:
1) Timing mechanism of explanations: Delivering timely explanations can help human drivers/passengers react to emergent situations, such as takeover requests, appropriately and prevent a potential danger in the vicinity.
**B**: According to Koo et al.’s study [193], it is favorable to convey explanations before a driving event is about to happen.
**C**: This judgment makes sense as on-time communication of explanations can bring situation awareness for people on board and enable them to monitor an autonomous car’s subsequent action.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 4
|
3.1.1 Ghost Bottleneck
Similar to the basic residual block in ResNet, the Ghost bottleneck consists of two stacked Ghost modules, as shown in Fig. 2. The first Ghost module is used as an expansion layer, increasing the number of channels, and the second Ghost module reduces the number of channels to match the shortcut. Then, the input and the result from the second Ghost module are fed into the shortcut connection to generate the final output. If Stride=2, a depthwise convolution layer is inserted between the two Ghost modules. <|MaskedSetence|> <|MaskedSetence|> The SE block [34] comprises an
average pooling layer and two pointwise convolutions. <|MaskedSetence|>
|
**A**: Depthwise convolution processes the image from each channel simultaneously, and the number of feature maps after this operation is the same as the number of input channels.
**B**: It is implemented to enhance the channelwise feature responses.
**C**: If SE=1, the squeeze-and-excitation (SE) module is selected.
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
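The depthwise convolution mentioned in the passage above (one kernel per channel, so the output feature-map count equals the input channel count) can be made concrete with a small NumPy sketch; the (C, H, W) layout and ‘valid’ padding are illustrative assumptions:

```python
import numpy as np

def depthwise_conv2d(x, kernels, stride=1):
    """Depthwise convolution: each input channel is filtered by its own
    kernel, with no cross-channel mixing, so the number of output
    feature maps equals the number of input channels.
    x: (C, H, W), kernels: (C, k, k), 'valid' padding."""
    C, H, W = x.shape
    k = kernels.shape[1]
    Ho = (H - k) // stride + 1
    Wo = (W - k) // stride + 1
    out = np.zeros((C, Ho, Wo))
    for c in range(C):                        # one filter per channel
        for i in range(Ho):
            for j in range(Wo):
                patch = x[c, i*stride:i*stride+k, j*stride:j*stride+k]
                out[c, i, j] = np.sum(patch * kernels[c])
    return out
```

With stride 2 this also illustrates the spatial downsampling role the depthwise layer plays between the two Ghost modules; the SE block mentioned in the passage (average pooling plus two pointwise convolutions) would operate on the channel means of such an output.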
8. <|MaskedSetence|> However, exploiting opponents in very large games is not trivial, and only recently was an algorithm created to exploit models in depth-limited solving. We explain the problem arising from the inability of gadgets to measure exploitability and we propose a full gadget that solves the issue. <|MaskedSetence|> Finally, we empirically evaluate the algorithms on multiple games. <|MaskedSetence|> Finally, we show that CDRNR outperforms SES in any game and can achieve over half of the possible gain without almost any exploitability.
|
**A**: We propose a new algorithm to quickly compute depth-limited best response and depth-limited restricted Nash response once we have a value function, creating the best performing theoretically sound robust response applicable to large games.
**B**: We show that CDBR outperforms LBR in both Leduc and HUNL and we show that CDBR performs significantly better against SlumBot than any other previous method.
**C**: Conclusion
Opponent modeling and exploitation is an essential topic in computational game theory, with many approaches attempting to model and exploit opponents in various games.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 1
|
The PRISMA flow diagram (Moher et al., 2009) in Figure 1 shows the process of identification, screening, eligibility, and inclusion of contributions in the final sample. <|MaskedSetence|> To conduct a MA it is crucial to select only comparable papers that provide complete information (mainly on estimated coefficients and standard errors) that can then be used to recover the average effect size. Our inclusion criteria prioritize studies reporting outcomes in an appropriate and consistent manner; in particular, we have excluded studies that do not rely on a complete set of objective measures. This implies the exclusion of papers that do not comply with the requirements of a MA. However, those excluded papers can be of interest in building the taxonomy of the whole concerned literature, as they may play a role in building links between different contributions (see Section 3). Similarly, non-quantitative (policy, qualitative or theoretical) papers may participate as well in the development of research fronts or give a direction to a certain thread of contributions and incidentally affect the detection of clusters. <|MaskedSetence|> Our final database of point estimates for the MA includes 96 papers released between 2003 and 2020, published in an academic journal, working papers series, or unpublished studies, providing 3,904 point estimates of the effect of slow-onset events (provided by 66 studies) and 2,065 point estimates of the effect of fast-onset events (provided by 60 studies). The list of articles is in the Appendix Table 2.
|
**A**: These reasons led us to build our citation-based network and perform the network analysis and the community detection on the whole sample, while only the sample for the MA is restricted only to quantitative contributions that meet the coding requirements.
**B**: For instance, studies that only present estimated coefficients, solely indicating the significance level, without reporting standard errors or t-ratios have been excluded because they do not allow for the calculation of a meta-synthesis.
**C**: It is important to note that there are two levels of inclusion: the first level identifies the sample of contributions included in our network analysis, while the second level is restricted to quantitative analyses suitable for the MA.
|
ABC
|
CBA
|
CBA
|
CBA
|
Selection 4
|
To tackle the difficulty, our primary idea is to facilitate collaboration between the meta and base levels. Specifically, we aim to leverage negative terms from both levels to handle the positive term. However, it turns out that the positive term cannot be entirely offset by the combined negative terms from meta and base levels. <|MaskedSetence|> <|MaskedSetence|> Nevertheless, another new positive term emerges due to the injected correction, which we ensure can be managed by the negative term from the base level. <|MaskedSetence|>
|
**A**: This generates a new negative term that, together with the negative term from the meta level, effectively cancels out the positive term.
**B**: As a result, the overall undesired positive term is finally addressed.
**C**: To address this issue, we introduce correction terms to the feedback loss and optimism in the meta-algorithm.
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 3
|
Intraclass correlation (ICC) has long been recognized as a vital reliability and reproducibility metric, especially for gauging similarity in paired data when the order of pairing is not preserved (Chen et al., 2018; Sarker et al., 2023; Solís-Lemus et al., 2023). In brain imaging, it serves as a popular baseline for test-retest (TRT) reliability assessments, often in conjunction with the Dice coefficient (Liao et al., 2013; Cole et al., 2014; Cousineau et al., 2017; Pfaehler et al., 2021; Zhang et al., 2018). The widespread use of ICC in these contexts underscores its perceived utility in evaluating consistency across imaging sessions or different imaging modalities. The conventional computation of ICC is typically through an ANOVA statistical model, which can be fairly limited and inflexible. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The full potential and utility of the transposition-based method for ICC computation, however, remain to be explored in future research.
**B**: In light of these advancements, our proposed transposition-based approach for computing correlation over paired data presents a novel approach to computing ICC, potentially offering a faster and more efficient alternative.
**C**: Recent years have seen a shift towards mixed-effects models, which offer greater flexibility and accuracy in estimating ICC, especially in datasets with nested or hierarchical structures (Chen et al., 2018; Nielsen et al., 2021).
|
CBA
|
CBA
|
ABC
|
CBA
|
Selection 1
|
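The conventional ANOVA route to ICC that the passage above contrasts with mixed-effects and transposition-based approaches can be sketched for the simplest one-way case; this is the textbook ICC(1) estimator, not the authors’ proposed method:

```python
import numpy as np

def icc_oneway(Y):
    """ICC(1) from the classical one-way ANOVA decomposition.
    Y: (n_subjects, k_raters) matrix of paired measurements
    (e.g. test-retest sessions)."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    # between-subject and within-subject mean squares
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((Y - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly reproducible measurements give an ICC of 1, while purely discordant pairs drive it negative, which is why ICC complements overlap measures such as the Dice coefficient in test-retest studies.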
<|MaskedSetence|> ) has exactly two fixed points at θ₁ = 1.15 rad and θ₂ = 1.85 rad in the interval [0, π), where θ₁ is a stable fixed point. Figure 1(f) presents a lift of the angle map. As the lift Φ(. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: ) is an.
**B**: ) is increasing monotonically, the angle map ϕ(.
**C**: verifies that the angle map ϕ(.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
The temperature response and control variables under no-anomaly scenario are shown in Figs. <|MaskedSetence|> 2, confirms the safety and stability of both strategies. <|MaskedSetence|> 2, the coolant temperature control at the boundaries show that the transient control action for StSf-C is greater than St-C. <|MaskedSetence|>
|
**A**: 1 - 2.
The temperature of the battery module from the two boundaries and mid-section, as shown in Fig.
**B**: In Fig.
**C**: However, in steady state both control actions are somewhat comparable.
|
ABC
|
BCA
|
ABC
|
ABC
|
Selection 1
|
<|MaskedSetence|> For example, Bin Morshed et al. [21] explore mood instability of 603 information workers and find that the sleep and activity duration obtained from a Garmin wearable are negatively correlated with mood instability scores. The authors compute mood instability in the study using the self-reported positive and negative affect schedule (PANAS) [48]. Umematsu et al. [13] use wrist wearables that capture skin conductance, skin temperature and acceleration from 39 workers in a Japanese company over a 30-day period. Stress, mood and health scores are collected every morning, using self-reported scores from 0 to 100. The authors then try forecasting the scores for the next day using the previous day’s physiological data. They obtain a mean absolute error of 13.47, 14.09 and 18.51 while predicting stress, mood and health scores, respectively. For fine-grained mood classification, Zenonos et al. [26] proposed a framework that classifies eight different types of moods into five categories using sensor data such as heart rate, pulse rate, PPG, ECG, skin temperature, etc. Mark et al. [22] studied how workers’ moods were connected to the amount of time they spent on emails. They found that email usage had a negative correlation with mood balance: the more time someone spent on emails, the worse their mood became in comparison to their positive feelings.
In another study of 50 hospital workers which also uses PANAS, Nadarajan et al. [23] find that speech activity can explain some variance in predicting positive affect measure. <|MaskedSetence|> The authors extract several features from the audio to identify foreground speech. They then use a linear mixed effects model to estimate positive and negative affect from foreground activation (i.e., the percentage of recording time that foreground speech is present). <|MaskedSetence|> Sensing modalities include wearable, phone application, Bluetooth beacons and social media. Models trained on the fusion of all the features from different sensing modalities leads to up to 13.9% improvement in the symmetric mean absolute percentage error (SMAPE) when predicting affect, anxiety and sleep quality scores.
|
**A**: Worker’s affect, emotions and mood have also been studied in this domain.
**B**: Similarly, in another study, Robles-Granda et al. [16] utilize multiple sensing modalities to assess anxiety, sleep and affect of 757 information workers.
**C**: Employees wear a specifically designed audio badge during their work-shift hours.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 1
|
<|MaskedSetence|> Recall that a sample in PU-Setting comprises a sample of PUs’ parameters (location and power) and the optimal power allocated to the SU. In SS-Setting, a training sample comprises spectrum sensors’ received power readings. The location of entities is available by using a GPS dongle connected to the laptops as described below, and the sensor’s received power is computed as follows. <|MaskedSetence|> Then, we compute the area under the PSD curve over the 1 MHz channel of interest (see below), and finally convert the computed area to an appropriate unit.
Determining Labels (Optimal Power Allocated to SU). <|MaskedSetence|> To determine whether PU to PUR transmission is incurring any harmful interference from SU, we have PU continuously streaming ASCII messages over the 1 MHz bandwidth channel centered at frequency 915.8 MHz, and check if the messages are successfully received at the PUR. This end-to-end communication system is implemented using GNU Radio.
|
**A**: First, we compute an FFT on the I/Q samples collected within a time window to get a power spectral density (PSD) plot.
**B**: Collecting Training Samples.
**C**: We essentially do a binary search to estimate the optimal power that can be allocated to SU.
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 3
|
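The sensor-power computation described in the passage above (FFT over a window of I/Q samples, PSD, area under the curve over the channel, conversion to a power unit) can be sketched as follows; the sampling rate, window length, and dBm conversion here are illustrative assumptions, not the paper’s exact pipeline:

```python
import numpy as np

def channel_power(iq, fs, f_center, bandwidth):
    """Estimate received power in a channel from a window of I/Q samples:
    FFT -> periodogram PSD -> integrate the PSD over the channel of
    interest -> convert to dBm.  Frequencies are baseband offsets from
    the tuned centre frequency."""
    n = len(iq)
    spec = np.fft.fftshift(np.fft.fft(iq))
    psd = (np.abs(spec) ** 2) / (fs * n)          # power per Hz
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))
    mask = np.abs(freqs - f_center) <= bandwidth / 2
    df = fs / n
    watts = np.sum(psd[mask]) * df                # area under the PSD curve
    return 10 * np.log10(watts * 1e3)             # watts -> dBm
```

The binary search for the optimal SU power mentioned in the passage would then repeatedly transmit at a candidate power, check PUR message reception, and halve the search interval accordingly.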
To a human eye, two figures look the same if they are related by a rigid motion. <|MaskedSetence|> This group is called the special Euclidean group and is denoted by SE(2)𝑆𝐸2SE(2)italic_S italic_E ( 2 ). In many applications, the congruence with respect to other groups is considered. For example, two shadows cast by the same object onto two different planes by blocking the rays of light emitted from a lamp are related by a projective transformation. <|MaskedSetence|> See [13] for an excellent exposition of the roles played by projective, (special) affine, and (special) Euclidean transformations in computer vision. <|MaskedSetence|>
|
**A**: If a light source can be considered to be infinitely far away (like a sun), then the shadows are related by an affine transformation.
**B**: Starting in the 19th century, it was widely accepted that Euclidean geometry, although the most intuitive, is not the only possible consistent geometry, and that congruence can be defined relative to other transformation groups [14].
**C**: However, since a reflection changes the orientation of an object, a group of orientation-preserving rigid motions, consisting of rotations and translations only, is often considered.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 4
|
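Congruence under SE(2), as discussed in the passage above, can be made concrete with a small sketch of orientation-preserving rigid motions (rotation plus translation) acting on point sets; the helper names are ours:

```python
import numpy as np

def se2(theta, tx, ty):
    """Homogeneous 3x3 matrix of an orientation-preserving rigid motion:
    rotation by theta followed by translation (tx, ty) -- an element
    of the special Euclidean group SE(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

def apply(g, pts):
    """Apply an SE(2) element g to an (n, 2) array of planar points
    via homogeneous coordinates."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (homog @ g.T)[:, :2]
```

Because SE(2) excludes reflections, all pairwise distances and the orientation of a figure are preserved, which is exactly the notion of congruence a human eye applies to two figures related by a rigid motion.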
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The work [15] introduces an additional predictive step following the algorithm developed in [41], if certain conditions on the estimated gradient and descent direction are met. Similar algorithms have also been extended to cases where zeroth-order [6, 30, 36, 26, 32] and second-order [16] oracles are used instead of (sub)gradients. The works [6, 36] on bandit feedback consider the situation where there are time-varying inequality constraints. In such cases the algorithms proposed in [41] are hard to implement because of the high computational demand of the projection operation. This motivates recent research on online optimization algorithms with time-varying constraints, including the primal-dual algorithm proposed in [13, 37] and a modified saddle-point method given in [9]. Other algorithms have also been proposed to handle stochastic constraints [38] and to cover continuous-time applications [27]. In the case where only the values, rather than the exact form, of the cost function are revealed to the decision maker, bandit-feedback-based online algorithms [6, 36] can be used to solve the problem; other methods such as forward gradient [22] have also been proposed recently to deal with this issue. The need for applications in large-scale systems has also led to extensive research on distributed OCO. Distributed online algorithms that achieve sublinear regret bounds for convex optimization problems with static constraints can be found in [28, 12, 33]. For instance, [28] proposes a distributed version of the dynamic mirror descent algorithm, a generalization of the classical gradient descent methods suitable for high-dimensional optimization problems.
The work [19] proposes distributed online primal-dual algorithms for optimization problems with static coupled inequality constraints while the work [35] studies distributed online convex optimization with time-varying inequality constraints in the discrete-time setting. For a more detailed documentation of recent advances of online optimization, we refer the readers to the survey paper [17].
|
**A**: Under stronger assumptions on the cost functions such as strong convexity, an improved logarithmic regret bound can be achieved [11, 23, 10].
**B**: If future information is available, it can be used to further improve the performance of the online optimization algorithm in terms of regret bounds.
**C**: A static regret upper bound that is sublinear in T is proved, where T is the length of the horizon.
|
BAC
|
CAB
|
CAB
|
CAB
|
Selection 2
|
<|MaskedSetence|> Rather than dividing along the lines of function (computation vs. communication), we propose a partitioning based on dimension (time vs. <|MaskedSetence|> We argue that analog implementation is optimal for temporal integration of signals, particularly spiking signals, while digital technologies are better suited for spatial integration. Newly developed memristive technologies [15, 14, 17, 25] provide an excellent physical substrate for temporal integration. <|MaskedSetence|> It exhibits space-time separability, as it integrates information over time, pixel by pixel, or more generally, neuron by neuron, followed by spatial integration across pixels or neurons. Space-time separability is a well-known principle in digital signal processing algorithms, offering significant implementation advantages [67].
.
|
**A**: space).
**B**: In contrast, spatial integration, which requires signal communication across space, is best achieved using digital technologies.
The memristive network we study moves in the direction of the proposed partitioning.
**C**: In this context, we advocate for a hybrid model that employs a different partitioning in the abstraction.
|
CAB
|
CAB
|
ACB
|
CAB
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> (2015); Bhawalkar
et al. (2013); Bilò et al. (2018); Epitropou et al. <|MaskedSetence|> (2016), and the results focus on equilibrium existence and social quality, measured by the price of anarchy.
In the AI and multi-agent systems community, opinion formation is studied intensively. In Auletta
et al. (2019) a co-evolutionary model is investigated, where also the innate opinion may change over time..
|
**A**: Johnsen (1990) (extending earlier work by DeGroot DeGroot (1974)) each agent has an innate opinion and strategically selects an expressed opinion that is a compromise of its innate opinion and the opinions of its neighbors.
**B**: Recently, co-evolutionary and game-theoretic variants were studied Bindel
et al.
**C**: (2019); Fotakis et al.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 3
|
<|MaskedSetence|> (2017a) as our primary dataset and randomly selected 99,000 images. Random selection of datasets introduces an element of stochasticity that helps a model learn from different parts of the data distribution for robust representation. <|MaskedSetence|> This means that all images of a particular patient were assigned exclusively to either the training or test set, but not both. <|MaskedSetence|>
|
**A**: By separating the images at the patient level, we minimized the data leakage risk and avoided introducing biases into our deep learning model.
.
**B**:
We utilized the ChestX-ray8 dataset Wang et al.
**C**: However, we implemented a patient-level dataset division to obtain the train and test set to avoid data leakage.
|
BCA
|
BCA
|
ABC
|
BCA
|
Selection 1
|
<|MaskedSetence|> For a fixed-sample problem, the Bernstein–von Mises theorem describes the asymptotic equivalence of Bayesian and frequentist inference. However, in an adaptive sampling scheme, underestimation of an arm due to the randomness of the empirical mean results in a smaller number of samples of the arm in the future. <|MaskedSetence|> <|MaskedSetence|> This is notable because it enables solutions that extend beyond the conventional one- or two-step lookahead. We demonstrate several instances in which it is feasible to perform exact analyses of dynamic programming regardless of the need to project the evolution of the posterior over extended future periods.
|
**A**: Bayesian algorithms are robust up to a polynomially small underestimation, whereas frequentist algorithms are robust up to an exponentially small underestimation.
Our results offer analytical innovation by establishing foundational principles for a formal analysis that yields exact solutions in dynamic programming.
**B**:
Note that this discrepancy between Bayesian and frequentist measures differs considerably from the situation in standard statistical inference with non-adaptive samples.
**C**: Bayesian and frequentist BAI algorithms are both robust against such randomness but with different confidence levels.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
We also run experiments on the ShapeNet dataset Chang et al. <|MaskedSetence|> We utilized 3D Steerable CNNs proposed by Weiler et al. (2018b) as an equivariant encoder for the 3D voxel input space. We utilized the scalar outputs as a rotation-invariant embedding (z) and predict (analogously to our experiments on 3D point clouds) two rotation-equivariant vectors to construct a rotation matrix ρ(g). <|MaskedSetence|> <|MaskedSetence|> Random rotations are also applied to the common test set. In Figure 6 we visualize a t-SNE projection of the embeddings of both models. We can see a well-structured embedding space for our model with distinct clusters for the different shape classes. On the other hand, the embeddings produced by the non-invariant autoencoder are less structured and one can make out different clusters for the same shape label but in different orientations. Moreover, we compared the downstream performance and generalizability of a KNN classifier on shape classification, trained on 1000 embeddings and tested on the rest. The classifier based on our rotation-invariant embeddings achieved an accuracy of 0.81 while the classifier based on the non-invariant embeddings achieved an accuracy of only 0.63.
.
|
**A**: (2015).
**B**: In Figure 11 in the Appendix we show example reconstructions of shapes from the SE(3)-invariant representations.
**C**: Similar to our MNIST experiment, we compared the resulting embedding space to the embeddings produced by a non-invariant autoencoder model.
As the dataset comes in an aligned form (e.g., cars are always aligned in the same orientation), we additionally applied random 90 degree rotations to remove this bias (while avoiding interpolation artifacts) when training the non-invariant model.
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Hierarchical Bayesian modelling has been used for air pollution monitoring and modelling, e.g. Dawkins et al. (2020), Cocchi, Greco, and Trivisano (2007) and Sahu (2012), but not, to our knowledge, in combination with Bayesian optimisation.
.
|
**A**: 2008).
The advantage of using a hierarchical model for BO is that it can be used to transfer information about the distribution of GP hyperparameters across different contexts – such as cities – without assuming that these contexts are identical.
This is advantageous in applications like pollution monitoring where there are plentiful observations for some contexts but not others, e.g.
**B**:
Hierarchical Bayesian modelling is the principle of having layers of random variables built on top of each other (Shiffrin et al.
**C**: cities with and without extensive pollution monitoring programs, and sample efficiency is paramount as each sample is expensive to collect.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 1
|
Note that the SINR of the combined packet is equal to the sum of the SINRs of its constituents only when the interference in each is uncorrelated. This will not be the case if a given user collides with another in more than one slot. Whether the resulting SINR will be higher or lower than the sum will depend on whether the interference adds destructively or constructively.
Interestingly, even though both are equally likely, the performance of SIC is ultimately impaired. Consider the case with two users colliding in more than one slot and let us establish a baseline where the combined SINR is equal to the sum of SINRs in individual slots. <|MaskedSetence|> Importantly, if the combining is negative (positive), it is negative (positive) for both users. Consider now 4 possible decoding outcomes that can happen in the baseline scenario. If both users are successful, then positive MRC would have no effect. Similarly, if only one of them succeeds but the other fails, positive MRC also wouldn’t change anything since the successful user can be cancelled through SIC anyway. Only when both users fail can the positive MRC make a difference, that is, if it makes at least one of them decodable and triggers SIC.
Now let us consider negative MRC. When both users fail it has no effect. If there is one successful user, it can happen that negative MRC turns it into an undecodable one, thus making SIC impossible. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: As already mentioned, in reality, the SINR after MRC will be lower (we can call it negative MRC) or higher (positive MRC) than the baseline.
**B**: Similarly, if both users are successful, negative MRC could make them both undecodable (in this case it is not enough to turn just one of them, as the SIC could still be applied).
In the end, even though negative and positive MRC are equally probable, if the system uses SIC, the negative MRC has the potential to be detrimental in three out of four cases, while the latter can only help in one of them..
**C**: To circumvent that, a more computationally-heavy equalization method such as zero-forcing (ZF) or minimum mean squared error (MMSE) would have to be used.
Conversely, in a Steiner system with t = 2 each user is guaranteed to collide with another at most once so the interference is always uncorrelated..
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
The continuous PAD annotations are also examined by means of the Intra-class Correlation Coefficient (ICC) based on a single-rating, absolute-agreement, two-way mixed effects model, as implemented in the icc() function from the library irr in the R software. This study is used to evaluate the inter-rater consistency of the continuous annotations (targeted or reported ones) when classifying stimuli with respect to the corresponding discrete annotations (targeted or reported ones). Based on the 95% confidence interval of the ICC estimate, ICC index values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are considered poor, moderate, good, and excellent reliability, respectively [45].
Table 4 shows the reliability ICC metrics analyzing the targeted continuous annotations regarding the targeted discrete ones (see the Targeted field). <|MaskedSetence|> <|MaskedSetence|> However, a moderate reliability (close to good) is obtained for fear and tenderness, with good reliability for calm. This table also shows the reliability ICC metrics analyzing the reported continuous annotations regarding the reported discrete ones (see the Reported field). <|MaskedSetence|> From this study, we conclude that (1) the continuous labeling procedure is less robust than the discrete one (possibly due to the difficulty in understanding and applying the PAD metrics); (2) the continuous labeling for the reported emotions is slightly more robust than the targeted one; and (3) fear is one of the most robustly labeled continuous emotions..
|
**A**: It highlights the case of disgust with an even almost zero reliability.
**B**: Analyzing this table, poor consistency is found for 4 out of 8 of the emotions, which are amusement, anger, disgust, and sadness.
**C**: Analyzing this table, it is found that the reliability obtained is slightly better than in the targeted case, with poor consistency for 5 out of 12 of the emotions (amusement, attraction, contempt, disgust, and sadness), a moderate reliability for anger, hope, joy, tedium, and tenderness, and good reliability for calm and fear.
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 3
|
Figure 2. A schematic representation of two types of adversarial defense. <|MaskedSetence|> We also provide all the necessary theoretical foundations that underpin these attacks and defenses. Prior work (Demetrio et al., 2021; Li et al., 2021a) is limited to Windows OS-based devices and PDF files and only briefly discusses Android mobile malware attacks. <|MaskedSetence|> (Grosse et al., 2023) analyze attack occurrence and concerns in real-world settings and report quantitative results. Aryal et al. (Aryal et al., 2021) present a survey on adversarial machine learning for malware analysis but only focus on attacks. Berger et al. (Berger et al., 2022a) only provide a survey on problem-space evasion attacks in the Android operating system. However, given the current attention to Android malware, there is a need for a deeper and more comprehensive understanding. <|MaskedSetence|>
|
**A**: The main contributions of this paper are summarized below:.
**B**: Left: reactive, Right: proactive.
This paper is a comprehensive survey of the most significant research performed on evasion attacks and defenses for Android malware classifiers.
**C**: Grosse et al.
|
CBA
|
BCA
|
BCA
|
BCA
|
Selection 3
|
Status and trends. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> A more detailed summary of their designs, applications in AD systems, and vulnerabilities is in Appendix -A. Currently, none of the existing works study the downstream AI components such as prediction and planning, which will be discussed more in §IV-D.
.
|
**A**: As shown in the tables and Fig. 6 in Appendix, most (>86%) of the existing works target perception, while localization, chassis, and end-to-end driving are all at or below 6.2%.
**B**: The targeted AI components in the existing works are summarized in Tables I and II.
**C**: Among the perception works, the two most popular ones are camera (60.0%) and LiDAR (21.5%) perception.
|
BAC
|
BAC
|
CAB
|
BAC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Its long intervals terminate at 14.5 and start from 16.5. It also contains a Red join gadget in [1.5, 13.5]. Its long intervals terminate at 1.5, 7.5 and start from 10.5, 13.5.
.
|
**A**: 6.
The sixth buffer contains a Blue join gadget in [18, 21].
**B**: It also contains the second part of the third switch gadget in [14.5, 17.5].
**C**: Its long intervals terminate at 18 and start from 21.
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 1
|
GTEA & 50Salads: To reproduce our claims on natural videos, we retrain phase recognition models for online action segmentation on GTEA [20] and 50Salads [67].
AutoLaparo & CATARACTS: For state-of-the-art comparisons, we retrain phase recognition models on two additional surgical datasets. AutoLaparo [75] contains 21 long videos (10/4/7 split) ranging from 27 to 112 min (mean ca. 66 min) with 7 coarse phases annotated. <|MaskedSetence|> To be able to compare to the 25/25 split used in previous work [12], we use a 20/5/25 split. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: On both datasets, we report the mean video-based accuracy as well as macro-precision, -recall and -jaccard, which are computed over all test frames.
**B**: See E & F for details..
**C**: CATARACTS [91] contains 50 short videos ranging from 6 to 40 min (mean ca. 11 min) with 19 fine-grained steps annotated.
|
CAB
|
CAB
|
ACB
|
CAB
|
Selection 4
|
<|MaskedSetence|> First, from a unified perspective, CL and ML have the same purpose of approaching WDFS, except for PG. Second, CL and ML show a mismatch between two similarity distributions of sampled pairs and all negative pairs. Based on these insights, we developed UNPG by combining two PG strategies (MLPG and CLPG) to alleviate the mismatch. <|MaskedSetence|> <|MaskedSetence|> Finally, we suggest two research directions in FR: 1) pair generation strategies in the qualitative aspect and 2) loss functions considering the capability of representation power.
.
|
**A**: Filtering was also applied to remove negative pairs in both too-easy and too-hard pairs.
**B**: 5 Conclusion
This paper is based on two insights.
**C**: It was observed that UNPG increases the ability to learn existing FR models compared to MLPG and CLPG by providing more informative pairs.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> We have investigated the use of the synthetic images obtained using examples from publicly available databases to train the model for distinguishing between AMD and healthy eyes. We found that experienced clinical experts were unable to differentiate between synthetic and real images. We have tested the model for generalisability by training the model using images from three databases and validated it using a fourth database. <|MaskedSetence|>
|
**A**: We also have demonstrated that the classification accuracy of deep learning networks marginally outperformed clinical experts in separating the AMD and Non-AMD retinal images.
.
**B**: We have compared a number of synthetic medical image generation techniques and found StyleGAN2-ADA to be the most suitable, which we then used to develop a method to generate synthetic images.
**C**: 5 Conclusion and Future Works
We have employed a retinal image quality assessment model in the preprocessing step.
|
CBA
|
CBA
|
ABC
|
CBA
|
Selection 4
|
In the previous experiments, 1/3 of the pixels in each patch are used as the overlapping pixels between two neighboring patches, which is an appropriate value, since the number of pixels overlapping between the middle patch and both sides is only 2/3, and the information of the 1/3 of the pixels at the center of the patch is still retained. If a larger number of overlapping pixels is employed, such as 1/2, the middle patch will completely overlap with the patches on both sides. <|MaskedSetence|> <|MaskedSetence|> From the results, it is seen that the performance on the test set increases slowly to a plateau as the number of overlapping pixels increases. It illustrates that the more overlapping pixels there are, the larger the number of network parameters. <|MaskedSetence|>
|
**A**: If a smaller number is used, such as 1/4, the number of pixels in the overlapping region will be too small to solve the problem of regional connectivity.
**B**: In order to analyze the influence of overlapping pixels between two patches, an experiment with all other settings kept the same as before is conducted on the RAF-DB dataset, and the result is shown in Table VII.
Table VII shows the accuracies obtained by the proposed method for different numbers (N) of overlapping pixels.
**C**: According to our analyses, the main reason is that it is easier to introduce redundant information between adjacent patches when the number of overlapping pixels is larger.
.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 4
|
There are additional complications in real-world implementations of Elo. <|MaskedSetence|> For example, K may be large for a new player to facilitate faster convergence of their rating to their true skill level, and may decrease over time to reduce arbitrary fluctuations for experienced players. Sometimes rating updates are batched. <|MaskedSetence|> <|MaskedSetence|> All these details and even more complications are outlined thoroughly by different organizations implementing Elo; one may read about them in, for example, the FIDE Handbook [4].
.
|
**A**: Often times all the games played at a single tournament are batched together in that manner.
**B**: That is, one accumulates their pot winnings and losses over several games, and updates their rating once at the end.
**C**: For legibility and practicality, fractional and negative rating points are avoided by scaling and shifting points up and rounding to the nearest integer, and by imposing an artificial floor on possible ratings (by gifting a player points if they would otherwise dip below the floor).
The total size of the pot K may also vary depending on various factors, such as how many games each player has played before.
|
BAC
|
CBA
|
CBA
|
CBA
|
Selection 3
|
In the ML community, several methods for automatically categorizing data instances into different types exist, with a particular focus on the outlier/anomaly detection research in the past decades [CBK09, HA04]. Nevertheless, most algorithms cannot identify rare cases that are typically isolated groups, including a set of comparable data examples that deviate from the majority—rather than single isolated instances which are outliers. <|MaskedSetence|> The last category is a hybrid one, which aims to combine the benefits of the various techniques from the other categories. The problem with all the approaches, except for the density-based approaches, is the misalignment with sophisticated undersampling (e.g., NCR) and oversampling algorithms (e.g., ADASYN) that are using KNN to propose instances for removal or addition, respectively. <|MaskedSetence|> <|MaskedSetence|> Although those studies suggest that applying sampling techniques in specific types of instances (e.g., by using only outliers) can boost predictive performance, controlling which subsets of particular instance types are considered when undersampling and oversampling is an undiscovered step. This research opportunity inspired us to design HardVis.
Density-based algorithms [HHHM11, HLL08] also work well with the detection of rare categories by discovering substantial changes in data densities using a KNN search in the high-dimensional space. But how to choose the best k-value for a given data set? While it is possible to estimate the best k-value automatically by using the local outlier factor [BKNS00], the balance of the distribution of safe and unsafe instances could be off when focusing merely on rare cases and outliers. Huang et al. [HCG∗14] proposed a method for automatically selecting k-values. However, their algorithm starts with a seed depending on the target category, which is often difficult to set. iFRED and vFRED [LCH∗14] are two approaches for identifying rare categories based on wavelet transformation without the necessity of any predefined seed. Nevertheless, these methods are robust in low-dimensional data only but fail to discover the remaining types of data introduced in Section 1, which are important for HardVis. Regarding decision boundaries and borderline examples, Melnik [Mel02] analyzes their structure using connectivity graphs [MS94]. And finally, Ramamurthy et al. [RVM19] utilize persistent homology inference to describe the ambiguity (or even lack) of decision boundaries. All described methods, while being valuable, do not focus on the problem of undersampling or oversampling at all, as it happens with our system.
.
|
**A**: The majority of anomaly detection techniques can be divided into five categories: (1) classification-based [HHWB02, WMCW03, MC03], (2) density-based [BS03, BKNS00], (3) clustering-based [MLC07, VW09, SPBW12], (4) statistical-based [YTWM04, KK17], and (5) ensemble approaches [VK09, SLSH15, VC17, ZDH∗17].
**B**: Two empirical studies [SK17, NS16] that were conducted with density-based sampling algorithms deploy KNN to distinguish the type of each instance along with multidimensional scaling (MDS) [Kru64], which is a global linear dimensionality reduction algorithm.
**C**: We follow the same methodology to characterize instances based on local characteristics, but HardVis uses an interactive UMAP projection [MHM18] since it preserves mostly the local structure [EMK∗21].
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 2
|
<|MaskedSetence|> However, the current implementation employs a uniform VDF delay for all transactions, which is not ideal given the dynamic nature of Ethereum. <|MaskedSetence|> Another solution in the Miner Extractable Value (MEV) mitigation space is Radius [12], which also aims to prevent frontrunning and sandwich attacks by implementing encrypted mempools. <|MaskedSetence|> In contrast, FIRST offers seamless integration with any EVM-based blockchain.
.
|
**A**: In contrast, FIRST conducts statistical analysis and assigns VDF delay based on the network usage.
**B**: However, the Radius solution necessitates a redesign of the mempool, making its direct applicability limited.
**C**:
We propose a general-purpose solution to the frontrunning problem using cryptographic protocols such as verifiable delay functions (VDFs) and aggregate signatures [17, 19] whose outputs are publicly verifiable.
Slowswap [11] utilizes VDFs to introduce delays for transactions related to AMMs only.
|
CAB
|
CAB
|
BAC
|
CAB
|
Selection 2
|
<|MaskedSetence|> Many motivating applications for learning with strategic behavior, such as college admissions and hiring, are precisely settings where the decision maker is capacity-constrained. Competition for the treatment arises when agents are strategic and the decision maker is capacity-constrained, complicating estimation of the optimal policy.
We adopt a flexible model where agents are heterogeneous in their raw covariates and their ability to modify them. Depending on the context, strategic behavior may be harmful, beneficial, or neutral for the decision maker. In some applications, strategic behavior may be a form of “gaming the system,” e.g. <|MaskedSetence|> In other applications, the decision maker may want to accept such agents because the agents who would benefit the most from the treatment are those who can invest effort to make themselves look desirable. Lastly, as demonstrated by Liu et al. <|MaskedSetence|> Our model permits all of these interpretations because we allow for potential outcomes to be flexibly related to the agent’s type..
|
**A**: cheating on exams in the context of college admissions, and the decision maker may not want to assign treatment to agents who have high ability to modify their covariates.
**B**:
Although many recent works focus on learning in the presence of strategic behavior, learning in the presence of capacity constraints and strategic behavior has not previously been studied in depth.
**C**: (2022), when all agents have identical ability to modify their covariates, the strategic behavior may be neutral for the decision maker because it does not affect which agents are assigned treatment.
|
ABC
|
BAC
|
BAC
|
BAC
|
Selection 4
|
5.4 Evaluation on Other Architectures
To examine if the proposed inductive biases improve bias-resilience in other architectures too, we created OccamEfficientNet-B2 and OccamMobileNet-v3 by modifying EfficientNet-B2 [65] and MobileNet-v3 [28, 29]. <|MaskedSetence|> <|MaskedSetence|> MobileNet-v3: 40.4) and COCO-on-Places (OccamEfficientNet-B2: 39.2 vs. EfficientNet-B2: 34.2 and OccamMobileNet-v3: 40.1 vs. <|MaskedSetence|> The gains show that the proposed modifications help other architectures too.
.
|
**A**: EfficientNet-B2: 34.4 and OccamMobileNet-v3: 49.9 vs.
**B**: MobileNet-v3: 34.9).
**C**: OccamNet variants outperform standard architectures on both Biased MNISTv2 (OccamEfficientNet-B2: 59.2 vs.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 2
|
Most of the existing VSS methods utilize the optical flow to capture temporal relations [11, 14, 13, 35, 37, 38, 40, 34, 42, 78, 79]. <|MaskedSetence|> Among them, some works aim at improving the segmentation accuracy by exploiting the temporal relations using the optical flow for feature warping [11, 13, 14] or the GAN-like architecture [80] for predictive feature learning [12]. The other works aim at improving the segmentation efficiency by using temporal consistency for feature propagation and reuse [38, 41, 37, 40], or directly reusing high-level features [37, 39], or adaptively selecting the key frame [34], or propagating segmentation results to neighbouring frames [42], or extracting features from different frames with different sub-networks [36], or considering the temporal consistency as extra training constraints [35]. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: These methods usually adopt different smart strategies to balance the trade-off between accuracy and efficiency [78, 79].
**B**: Zhu et al. [81] utilized video prediction models to predict future frames as well as future segmentation labels, which are used as augmented data for training better image semantic segmentation models, not for VSS.
**C**: Different from the above approaches, STT [82] and LMANet [83] directly model the interactions between the target and reference frame features to exploit the temporal information.
.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
Figure 20 demonstrates the latency breakdown of Hotline and three hybrid baselines. <|MaskedSetence|> <|MaskedSetence|> In contrast, for non-popular μ-batches, it hides the parameter gathering under popular μ-batch execution. In the case of the Taobao dataset, which is dominated by the neural network, deep learning execution surpasses the communication time.
Overall overhead is shown in Figure 20.
This overhead for Hotline includes online profiling and is minimal, primarily because only the online profiling done at the start of training is not hidden under GPU execution. All subsequent profiling is hidden under GPU execution, significantly reducing overhead. Also, the lookup engine parallelizes the input accesses from EAL for embedding indices. This results in fast online profiling. In our evaluation, we transitioned to the access learning phase twice within a single epoch.
|
**A**: Users can specify the frequency at which the learning phase is invoked.
.
**B**: Hotline eliminates the CPU-GPU communication time for popular μ-batches, which are executed completely on the GPU.
**C**: The Criteo Kaggle and Terabyte datasets, which are more embedding- and memory-intensive, incur high CPU–GPU communication time.
|
BAC
|
CBA
|
CBA
|
CBA
|
Selection 3
|
<|MaskedSetence|> Theorem 1. These are the so-called flat or constant cells (Definition 3.7), which map to nontransversal thresholds (cf. Lemma 2.1). <|MaskedSetence|> <|MaskedSetence|>
|
**A**:
Unsurprisingly, the key to understanding how the topology of sublevel sets changes as one varies the threshold are the PL analogues of points where the gradient of the function vanishes, cf.
**B**: For experts familiar with the ways ReLU neural network functions can degenerate, Lemma 3.12 should provide a clear explanation for why most ReLU neural network functions are not PL Morse.
.
**C**: Indeed, one should view the analogues of critical cells (both Morse and non-Morse) in the PL category as an appropriate subset of the flat cells.
|
ACB
|
ACB
|
BAC
|
ACB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> Critical slowing down near the phase transition point may invalidate the applicability of this method. <|MaskedSetence|> TFQIM is a widely used model for studying the phase transitions of one-dimensional spin chains and has been extensively studied analytically and experimentally, such as in [40, 41, 42]. Therefore, TFQIM is a very suitable testbed to check the representational accuracy of neural networks and the robustness of machine learning methods. Specifically, we study the time evolutions of the energy, universal statistics and correlations of the topological defects formed in the TFQIM after a quantum phase transition induced by a quench. In particular, we quench the strength of the transverse magnetic field to drive the system from a paramagnetic state into a ferromagnetic state, during which the topological defects, i.e. the kinks where the polarization of the spins changes direction, will form due to the KZM. In the machine learning setup we introduce the Restricted Boltzmann Machine (RBM) as a representation of the quantum state of the TFQIM. The RBM is a kind of neural network with two layers of neurons, i.e. a visible layer and a hidden layer (see Fig. 1). In order to solve for the ground state and the time evolution of the system, the stochastic reconfiguration (SR) method and the time-dependent variational Monte Carlo (VMC) approach [43] are utilized, respectively. We find that the time evolutions of the energy expectation value from the neural networks are perfectly consistent with the results reported in [40]. After the quench, the excitation energy of the system is found to satisfy a power-law relation in the quench rate, which reveals the proportional relationship between the excitation energy and the kink numbers. Besides, the counting statistics of the kink numbers satisfy the Poisson binomial distributions introduced previously in [30, 44]. 
By computing the first three cumulants of the kink pair numbers, we find that they satisfy universal power-law scalings with respect to the quench rate, consistent with the theoretical predictions. Additionally, we compute the kink-kink correlations at the end of the quench. The numerical data match the analytic formula presented in [41] very well. Therefore, our results demonstrate the very high accuracy of neural networks in investigating the critical dynamics of TFQIM..
|
**A**: The formation of topological defects and the KZM have been widely examined in various numerical simulations and experiments, including in quantum phase transition [29, 30], in quantum field theory with matrix product states [31], in AdS/CFT correspondence [32, 33, 34, 35], in programmable quantum simulators [36, 37] and in D-wave devices [38], to name some relevant references.
In [15] the machine learning methods were merely applied to unitary dynamics without phase transitions.
**B**: In this paper, we extend the machine learning methods introduced in [15] to study the nonequilibrium process of critical dynamics in a one-dimensional transverse-field quantum Ising model (TFQIM).
**C**: Critical dynamics, i.e., the dynamics across the critical point of a phase transition is more complex and has richer phenomena [39].
|
ACB
|
CBA
|
ACB
|
ACB
|
Selection 1
|
<|MaskedSetence|> In complex cases, this can be more difficult to learn. <|MaskedSetence|> Note that the only data available for the first sequence is from the PDE itself, i.e., just the initial condition. We take the prediction at t = d𝕋 by using the model of the first sequence and use this as the initial condition to make a prediction in the next sequence, and so on. <|MaskedSetence|>
|
**A**: which can be shown in Figure 2..
**B**: Seq2seq strategy was proposed in [16], where the PINN learns to predict the solution at each time step, instead of all times.
**C**:
The original PINN approach trains the NN model to predict the entire space-time at once.
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 3
|
6.6.1 Proof of Concept Experiments with Various Underlying Distributions
We used Gaussian as the underlying distribution of our synthetic datasets. <|MaskedSetence|> To do so, we follow the same procedure outlined in the construction of synthetic datasets in § 6.1.2, however, instead of generating a sample following a Gaussian distribution, we opt to utilize alternative distributions such as Uniform, Exponential, and Logistic. <|MaskedSetence|> In summary, aligned with our previous results, there is a consistent correlation between RU values and a model’s potential to predict correctly, with models exhibiting more error as RU values increase.
For the Uniform distribution (Figures 17(a)-17(e)), we expected the entire query space to be equally represented by the training data and hence the lack-of-representation scores to be universally low. As a result, the strongRU scores remain low. weakRU, in this case, mostly reflects the impact of uncertainty. <|MaskedSetence|>
|
**A**: The results are illustrated in Figure 18.
**B**: In this experiment, we study whether the underlying distribution of the data would affect the capacity of the RU measures in revealing unreliability.
**C**: Consistent with our previous results, one can see the high correlation between weakRU and model performance in Figure 17(e).
.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 4
|
2.2. <|MaskedSetence|> Taking a joint approach focused on online safety, for instance, Hashish et al. (Hashish
et al., 2014) proposed an app called “We-Choose,” which allowed parents and children to work together in selecting which apps were appropriate for use. <|MaskedSetence|> <|MaskedSetence|> (Charalambous et al., 2020) proposed a “Cybersafety Family Advice Suite” (CFAS), where youth had a say in what online activities would be monitored by their parents to alert them of suspicious activity. Further, Ghosh et al. (Ghosh.
|
**A**: In a more recent study, Charalambous et al.
**B**: Co-managing Online Safety and Privacy as a Family
Several studies have explained how adolescent online safety apps could benefit from allowing parents and teens to work more collaboratively on protecting teens online.
**C**: They found that this approach received higher levels of buy-in from children.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
The scatter plot of the cell towers is shown in
Figure 12, followed by the persistence diagrams of the two filtrations in Figure 13. Even without the aid of the confidence bands, one point is conspicuously far away from the diagonal in the persistence diagram of each filtration. The RDAD filtration picks up 2 more significant loops.
The two filtrations pick up completely different homology classes. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The class picked up by the distance-to-measure filtration is near Steens Mountain Wilderness in Oregon.
**B**: The 3 classes picked up by the RDAD filtration are Lake Michigan; Dallas, Texas; and the Texan region surrounded by Houston, Austin and San Antonio.
**C**: The last two regions have considerable population, and the sparsity of cellular towers there is likely due to the dataset’s incompleteness..
|
ABC
|
BAC
|
ABC
|
ABC
|
Selection 3
|
Datasets and Metrics
In our experiments, we use two AU benchmark datasets, BP4D (Zhang et al. 2014) and DISFA (Mavadati et al. <|MaskedSetence|> BP4D involves 41 young adults, including 23 female and 18 male adults. Each subject is asked to finish 8 tasks, and 324 videos containing around 140,000 images are captured. <|MaskedSetence|> <|MaskedSetence|> Frames with AU intensity labels greater than 1 are selected as positive samples..
|
**A**: 2013).
**B**: Each frame is annotated manually by a FACS coder with AU intensity labels within a scale of 0 to 5 for each frame.
**C**: Each frame is annotated with binary AU occurrence labels by two FACS coders independently.
DISFA involves 26 adults, and to record their spontaneous facial behaviors, they are asked to watch specific videos.
|
ABC
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In Ethereum, when a node receives a new transaction, it selects some of its neighbor nodes to forward the new transaction and selects the remaining neighbor nodes to forward the transaction hash [24]. If its neighbor nodes receive the transaction hash but the transaction is not in their local transaction pools, the neighbor nodes then request the transaction by sending the transaction hash to the node from which they received the transaction hash.
.
|
**A**: We measured the matched-blockbody probability in Ethereum.
**B**: The experimental setup is similar to [12].
**C**: 3.1.3 Measurement on matched-blockbody probability
As shown in our analysis, the matched-blockbody probability is important for our BBP and other compact-block-like protocols to reduce block transmission time.
|
ACB
|
CAB
|
CAB
|
CAB
|
Selection 2
|
<|MaskedSetence|> In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimated by spectral methods. However, Azizzadenesheli et al. (2016) and Guo et al. (2016) add extra assumptions such that efficient exploration of the POMDP can always be achieved by running arbitrary policies. In contrast, the upper confidence bound (UCB) method is used in Xiong et al. (2021) for adaptive exploration. <|MaskedSetence|> The most closely related work is Jin et al. (2020a), which considers undercomplete POMDPs, in other words, POMDPs with more observations than latent states. Their proposed algorithm can attain the optimal policy without estimating the exact model, but only an observable component (Jaeger, 2000; Hsu et al., 2012), which is the same for our algorithm design; however, their algorithm applies only to tabular POMDPs.
In a broader context of reinforcement learning with partial observability, our work is related to several recent works on POMDPs with special structures. For example, Kwon et al. (2021) considers latent POMDPs, where each process has only one latent state, and the proposed algorithm efficiently infers the latent state using a short trajectory. <|MaskedSetence|> (2021) considers POMDPs having tree-structured states with their positions in certain partitions being the observations. Compared with general POMDPs, these special structures reduce the complexity of finding the optimal actions, and the corresponding algorithms use techniques closer to those for MDPs. Also, the aforementioned literature only considers tabular POMDPs..
|
**A**: Kozuno et al.
**B**: However, they require strictly positive state transition and observation emission kernels to ensure fast convergence to the stationary distribution.
**C**:
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs.
|
CBA
|
CBA
|
CBA
|
ABC
|
Selection 3
|
4 Discussion
We conducted this systematic literature review based on a sample of 24 out of 678 research papers retrieved from the Scopus, IEEEXplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based automated speech therapy tools. <|MaskedSetence|> <|MaskedSetence|> This data suggests that a significant amount of research in this field is “one-off” by authors. Most authors explored the research area with one idea and did not develop or evaluate it further. We found that “automatic speech recognition” is the keyword most emphasized by the authors. <|MaskedSetence|> The majority of studies were from European, North American, and Asian countries, and the most prevalent language targeted by the included studies was English. This finding is in line with the fact that English is the most widely adopted language for ASR technologies (Benzeghiba et al., 2007). However, we can also observe that researchers have attempted to build AI-based automated speech therapy tools in other languages.
.
|
**A**: This finding is consistent with the notion that ASR is the core of AI-based automated speech therapy tools.
**B**: Surprisingly, 79 authors (86.81%) out of 91 unique authors have only one work on AI-based automated speech therapy in the last 15 years.
**C**: This growing interest can be due to the recent advancement in ASR technology and its improved accuracy.
|
CBA
|
ACB
|
CBA
|
CBA
|
Selection 1
|
Our outlier detection is a simple process inspired by compression ratio. Intuitively, any data point that belongs to an underlying community should have large compression ratios
with many points from the same community, whereas it will have a lower compression ratio w.r.t. inter-community points. On the other hand, outliers will have more similar compression ratios with all the other points. <|MaskedSetence|> <|MaskedSetence|> We also compare our algorithm with popular algorithms such as the Local Outlier Factor (LOF) method [BKNS00] and KNN-dist [RRS00], as well as more recent methods such as Isolation Forest [LTZ08] and ECOD [LZH+22], through both simulations and experiments on real-world data. <|MaskedSetence|>
|
**A**: Thus our algorithm simply removes points with low variance of compression.
We analyze this simple algorithm in an extension of the standard random vector mixture model.
**B**: This difference can be captured by the variance of the list of compression ratios between one point and all of the other points, with outliers having a lower variance of compression.
**C**: We show that this simple algorithm is very competitive with those popular outlier detection tools..
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 2
|
Facility location games with entrance fees.
In all the above models, the cost of an agent is measured by her distance to the closest facility. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Another example is that the environment around the facility may have an impact on the entrance fee. For example, the entrance fee of a facility in a popular scenic spot can be higher than the entrance fee of the same facility in a desolate place.
Various non-geographical settings can be accommodated within the location-dependent entrance fee model. Two illustrative examples include:.
|
**A**: An immediate example is that building a facility downtown would be more expensive than building one in the suburbs.
The entrance fee of the facility is decided by the building cost and thus also decided by the location where the facility is built.
**B**: This cost can be considered as the travel fee.
**C**: In many real-life scenarios, in addition to the travel fee, the agent may also need to pay the facility a service or entrance fee, such as tickets for swimming pools and museums.
The entrance fees may differ for facilities in different locations.
|
BCA
|
BCA
|
CBA
|
BCA
|
Selection 2
|
Among the available approaches, the concept of control invariant set is one of the most exploited historically, since it ensures the existence of some feedback law able to steer the closed-loop trajectories of the uncertain system within a prescribed state set 25, 6, 8, 37. <|MaskedSetence|> With a specific focus on discrete-time polytopic systems, an admissible control policy that actually makes a polyhedral CLF a suitable Lyapunov candidate for the closed-loop system is typically synthesized in two ways: through a variable structure 25, 46, 47, or a (minimal) selection-based controller 3. We will also refer to these policies as traditional stabilizing controllers for linear uncertain systems.
Once fixed feasible control inputs at the vertices of the invariant set have been computed, a variable structure controller either takes a convex combination of those values by exploiting the vertex reconstruction of any state belonging to such a set, or coincides with a purely linear gain stemming from a triangulation, i.e., a simplicial partition 16, of the underlying set. These methods therefore require one to solve a linear program (LP) online or to generate a lookup table to identify the region in which the current state resides. If the simplicial partition-based implementation is considered, then one has also to account for the complexity of the resulting invariant set, which is typically high 6, 8, 49, 10, 2, 9. <|MaskedSetence|> <|MaskedSetence|> By requiring the online resolution of a nonlinear optimization problem, parametric in the current measured state, this method directly enforces a certain degree of contraction possessed by the CLF at every control step. While solving a numerical optimization problem online provides flexibility and performance guarantees, the real-time computational efforts required complicate its application in polytopic linear systems characterized by high sampling rates..
|
**A**: This is traditionally achieved by associating a control Lyapunov function (CLF) with the invariant set design, which for polytopic systems has been proven to be universal, namely the stabilization of the linear uncertain system and the existence of a polyhedral CLF can be used interchangeably 7.
**B**: These methods can therefore require significant memory to store the vectors and/or matrices describing every simplicial partition and associated linear control gain.
**C**: As a common drawback affecting both the implementations, however, fixing the input values at the vertices may result in poor control performance for the stabilization task.
A more sophisticated control method coincides with the selection-based policy.
|
ABC
|
ABC
|
CBA
|
ABC
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> However, for extrapolation to unseen points in time, the overfitted model of [22] performs significantly worse, indicating that the physical model is poorly identified from a single video. <|MaskedSetence|> The fact that we achieve comparable results while using significantly less data highlights the advantage of combining the explicit dynamics model with the implicit representation for the objects. Note that we chose sequence 6 in particular, since it yielded the best visual results for the baseline. Table 1 shows a quantitative analysis of all 10 test sequences, highlighting again the advantages of our method in the setting of a single sequence as well as the competitiveness against the usage of considerably more data. More results can be found in the appendix.
Nonlinear damped pendulum. .
|
**A**:
Figure 3 shows a qualitative comparison of our results to the method of [22] trained in the two settings explained above.
**B**: We observe that for this sequence all approaches yield reasonable results for the reconstruction of the training frames.
**C**: While both the baseline trained on the full dataset and our method are able to identify the parameters with high accuracy, our method achieves an even lower error, which leads to a more precise prediction of the future frames.
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 1
|
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the QSC framework. <|MaskedSetence|> First, it utilizes high-dimensional quantum information to extract underlying structures of classical data, capitalizing on its efficiency in identifying atypical data patterns and surpassing classical machine learning speeds [10]. Second, the QSC framework delves into quantum semantic representations, highlighting quantum mechanics’ fundamental role in vector modeling and linear algebraic semantics [11, 12]. Thus, rather than merely transmitting raw data in compressed quantum states, the QSC framework intelligently extracts semantic information from the data. It then transmits only the essential quantum semantic representations over quantum channels, thereby resulting in more resource-efficient QCNs while maintaining high accuracy. Specifically, we provide a systematic approach for assessing and optimizing the minimality of quantum communication resources needed (e.g., entangled quantum states), and the accuracy of those resources in terms of quantum communication and semantic fidelity, showcasing the tradeoffs that exist. <|MaskedSetence|> The proposed framework provides a promising direction for researchers and engineers to explore the potential of QML in reducing the resources required in QCNs. <|MaskedSetence|> Subsequent quantum clustering extracts useful semantic concepts, which are then mapped into efficient semantic representations for transmission via entangled qudits over quantum channels. At the receiver end, the fidelity and accuracy of the quantum mapping are verified, followed by the reconstruction of semantic representations and derivation of semantic concepts through quantum measurements.
Figure 1: Illustrative figure showcasing the proposed QSC framework..
|
**A**: Simulation results validate that the QSC framework results in minimal quantum communication resources, saving 50-75% of the resources compared to semantic-agnostic QCNs, while achieving higher quantum semantic fidelity.
**B**: Figure 1 illustrates the proposed QSC framework, which starts at the transmitter, where raw data is embedded into qudits in high-dimensional Hilbert spaces.
**C**: This framework draws upon recent advancements in two key quantum information science areas.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 3
|
The transmissions must satisfy the rules and constraints below. <|MaskedSetence|> For a backhaul transmission, if an IAB-node working in HD mode is selected as a receiver of its parent node in a specific slot, it cannot transmit in the same slot. For an access transmission, a UE can be selected as a receiver if a beam points to the sector in which it is located. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: If more than one UE is present in the sector, one of them is randomly selected as the intended receiver.
Due to hardware limitations, a UE can receive from at most one IAB-node / IAB-donor in a slot, therefore, a collision occurs if a UE is selected as a receiver from more than one panel (i.e., more than one IAB-node) in the same time slot.
**B**: Whenever it occurs, no bit can be delivered to the UE.
For both backhaul and access transmissions, if the amount of data bits cached in a buffer, rather than the link capacity, is the limiting factor, the simultaneously activated outgoing links equally share the buffered bits to pursue fairness..
**C**: A parent IAB-node – more specifically, its RL agent(s) – will have to choose between (1) sending data bits to a child IAB-node via a backhaul link to refill its buffer, and (2) directly transmitting to a UE via an access link to myopically improve its throughput.
|
CAB
|
BAC
|
CAB
|
CAB
|
Selection 4
|
Situating our work within contract theory, we study an adverse selection model with a common value structure in the principal’s utility function. <|MaskedSetence|> This makes statistical inference challenging. <|MaskedSetence|> Importantly, inference must be done in a way that takes into account the incentives of the strategic agents. <|MaskedSetence|> That work establishes a minimax protocol similar to our work for the case where the principal’s action space involves setting the p-value threshold for approval, and it also analyzes the incentive structure of Phase III clinical trials for drug approval. See also Viviano
et al. (2021), who study multiple hypothesis testing with incentives.
.
|
**A**: Our key departure from the usual adverse selection setup is that we do not assume that the principal has a prior distribution about the agents’ hidden types.
**B**: Our focus is on designing contracts that allow the principal to carry out statistical inference in order to properly assess the hidden types of the agents.
**C**: Our work most closely relates to that of Tetenov (2016), who considers setting the type-I error level of a hypothesis test to account for an agent’s payoffs.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> Notably, registration through PPIR(MPC) yields negligible differences compared to Clear in terms of the number of iterations, intensity, and displacement.
In contrast, registering with PPIR(FHE) is not feasible when considering entire images due to computational complexity.
Nevertheless, Supplementary Figure A5 shows that neither MPC nor FHE decreases the overall quality of the affine registered images. <|MaskedSetence|> Table 2 (Efficiency metrics) shows that PPIR(MPC) performed on full images requires higher computation time and communication bandwidth compared to PPIR(MPC)+URS/GMS. These numbers improve considerably when using URS or GMS (by factors of 10× and 20× for time and bandwidth, respectively)..
|
**A**:
Whole body PET data: affine registration (SSD).
**B**: Table 2 compares Clear, PPIR(MPC), PPIR(FHE)-v1 and v2, showcasing metrics resulting from the affine transformation of whole-body PET images.
**C**: A comprehensive assessment of the registration results is available in the Appendix.
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 1
|
These existing approaches partially address the challenges of cloud-to-edge black-box model distillation, but none of them take into account the privacy leakage of user data when sending original local images to the cloud.
Figure 2:
The overall framework of MEKD. <|MaskedSetence|> Upper right: the process of deprivatization. <|MaskedSetence|> Lower right: the process of distillation with the frozen generator. <|MaskedSetence|> The student model is distilled by reducing the logit-level and image-level discrepancy..
|
**A**: Lower left: two architectures of GAN-based KD.
**B**: The synthetic privacy-free images are query samples sent to the teacher model through the APIs of cloud servers.
**C**: GAN is used to synthesize high-response images to the teacher model within the distribution of data in edge devices.
|
ACB
|
ACB
|
BAC
|
ACB
|
Selection 2
|
<|MaskedSetence|> Namely, if one trains FNO on an insufficiently fine grid, activation functions introduce distortions that FNO will learn to mitigate. <|MaskedSetence|> The precise discrepancy is hard to analyze but one can expect that for operators with smoother output aliasing will be milder. This hypothesis is confirmed by our experiments. Indeed, the right graph in Figure 2a shows that for the integration operator, which performs smoothing, aliasing leads to an approximately two-fold error increase, while for the differentiation operator the increase is five-fold. For both examples, FNO still produces good approximations, but one should be cautious using FNO for more complex problems, because in these applications FNO tends to be less accurate [Pat+22], [Gra+22].
Our solution to the problem of aliasing errors is to decouple interpolation from the mapping between functional spaces. It is described in Section 4. However, it is possible to decrease aliasing error even when FNO is used for interpolation. One simply needs to train (or fine-tune) on a sufficiently fine grid. <|MaskedSetence|>
|
**A**: The decrease of aliasing error in this scenario is illustrated in the left plot of Figure 2a..
**B**:
Theorem 1 predicts that in certain situations FNO has a systematic bias.
**C**: When the grid is sufficiently refined, aliasing errors disappear, but since FNO was trained to mitigate them, it predicts output functions that differ from targets it was trained on.
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> patch size h×w and groups g.
In Table 9, we found that generally, a smaller patch size is advantageous to patch-group distillation, whereas an overly large patch size may be unfavourable since it approaches the original feature.
Regarding the groups, the patches are merged into groups for joint distillation. In the experiment shown in Table 10, the patch size is set to 8×8, which divides the original feature map into 128/8 × 128/8 = 256 patches. <|MaskedSetence|> When only one group is used, it indicates that all of the patches will be distilled jointly. On the contrary, using 256 groups means each patch is distilled individually. <|MaskedSetence|>
|
**A**:
Validating the patch-group distillation.
Next we analyze the two key factors of patch-group distillation, i.e.
**B**: In this example, we found that 4 patches as a group can reach the best performance..
**C**: There are two extreme situations.
|
BCA
|
ACB
|
ACB
|
ACB
|
Selection 3
|
<|MaskedSetence|> We partition the total data into three groups based on the number of cylinders as follows: In the first group we place all the cars with 4 or fewer cylinders, in the second group are cars with 5 or 6 cylinders, and in the third group we put the cars with more than 6 cylinders. In Figure 8 the first group is colored in red, the second group is colored in blue and the third group is colored in orange.
We further partition the dataset into four groups based on the weights according to 25, 50 and 75 quantiles. <|MaskedSetence|> <|MaskedSetence|> The second perspective groups together datapoints with the same shapes, i.e., cars with similar weights are grouped together; see the third subfigure of Figure 8..
|
**A**: To show the clusters in the embedded dataset, we use the cylinders and the weights.
**B**: In Figure 8 the datapoints corresponding to the first group are shown in diamond shapes, the second group in crosses, the third group in circles and the fourth group in triangles.
Figure 8 shows that ENS-t-SNE was able to find an embedding of the dataset in 3D separating the data into several clusters.
**C**: The first perspective groups together datapoints with the same colors, i.e., cars with similar numbers of cylinders are grouped together; see the second subfigure of Figure 8.
|
ABC
|
BAC
|
ABC
|
ABC
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> On top of the low-rank transition, we define a Bellman operator, which performs a forward update for any finite-length trajectory. <|MaskedSetence|> The key to ETC is balancing exploitation and exploration along the representation learning process. To this end, we construct a confidence set of embeddings upon identifying and estimating the Bellman operator, which further allows efficient exploration via optimistic planning. It is worth mentioning that such a unified framework allows a variety of estimators (including maximum likelihood estimators and generative adversarial networks).
.
|
**A**: To this end, we identify a class of POMDPs with a low-rank structure on the state transition kernel (but not on the observation emission kernel), which allows prediction and control in a sample-efficient manner.
**B**: More specifically, the transition admits a low-rank factorization into two unknown features, whose dimension is the rank.
**C**: The Bellman operator allows us to further factorize the history across multiple steps to obtain its embedding, which assembles the per-step feature.
By integrating the two levels of representation learning, that is, (i) feature learning at each step and (ii) embedding learning across multiple steps, we propose a sample-efficient algorithm, namely Embed to Control (ETC), for POMDPs with infinite observation and state spaces.
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 3
|
Here and throughout the paper we use L^2(D) and H^1(D) to denote the standard Lebesgue and Sobolev spaces of functions on D. Moreover, H^1_0(D) is the subspace of functions in H^1(D) with vanishing trace on ∂D; its dual space is indicated by H^{-1}(D). <|MaskedSetence|> The norms in L^2(D), H^1(D), H^{-1}(D) are denoted by ‖·‖_{0,D}, ‖·‖_{1,D}, <|MaskedSetence|> respectively. <|MaskedSetence|> For D=Ω we drop the subscript D.
The Euclidean vector norm in ℝ^d is indicated by ‖·‖ (without any subscript).
.
|
**A**: The same symbols are used for inner product and norms
in the vector-valued versions of the spaces.
**B**: ‖·‖_{-1,D},
**C**: The inner product in L^2(D) is denoted by (·,·)_{0,D}.
|
CBA
|
CBA
|
BAC
|
CBA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Other token-mixing architectures [50] have also been proposed that use standard neural components, such as convolutional layers and multi-layer perceptrons (MLPs, i.e. 1×1 convolutional filters) for mixing visual tokens. MLP-mixer uses two MLP layers applied first to an image patch sequence and then to the channel dimension to mix tokens [15]. ConvMixer uses standard convolutions along spatial dimensions and depth-wise convolutions across channels to mix tokens with fewer computations than attention mechanisms [17]. PoolFormer uses pooling as a token-mixing operation [50]. These token mixing models perform efficiently compared to transformers without compromising generalization. However, these models do not use image priors well.
.
|
**A**: However, these models suffer from similar inflexibility to the ViTs.
Token mixers that replace the self-attention in transformers with fixed token-mixing mechanisms, such as the Fourier transform [47, 48], achieve comparable generalization with lower computational requirements on NLP tasks [49].
**B**: The quadratic complexity with respect to the sequence length (# of tokens) of transformers has also led to the search for other linear approximations of self-attention for efficiently mixing tokens [43].
**C**: Hybrid models aim to reduce the computational requirements of vision transformers by incorporating image-specific inductive biases in the form of additional architectural elements including distillation [42], convolutional embeddings [43, 44], convolutional tokens [45], and encoding overlapping patches [46].
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 4
|
Identifier-definition extraction limitations. Methods considering the specific link between identifiers and their definitions have split off into at least three recent tasks: identifier-definition extraction (Schubotz et al., 2017; Alexeeva et al., 2020), variable typing (Stathopoulos et al., 2018), and notation auto-suggestion (Jo et al., 2021). A lack of consensus on the framing of the task and data prevents a direct comparison between methods. Schubotz et al. (2017) even advise against using their gold standard dataset for training due to certain extractions being too difficult for automated systems, among other reasons. <|MaskedSetence|> Schubotz et al. (2017) also suggest that future research should focus on recall as current methods only extract exact definitions for 1/3 of identifiers, and suggest use of multilingual semantic role labelling (Akbik et al., 2016) and logical deduction (Schubotz et al., 2016) to improve on their approach. <|MaskedSetence|> <|MaskedSetence|> They propose future methods should combine grammar with a learning framework, extend rule sets to account for coordinate constructions, and create well-annotated training data using tools such as PDFAlign. Their proposed evaluation data MathAlign-Eval may allow for easier comparison of future work, similar to data proposed in Cobbe et al. (2021). The need for math-specific benchmarks is echoed in Greiner-Petter et al. (2020).
.
|
**A**: We assume the issues with superscript identifiers (such as Einstein notation etc.) from Schubotz et al. (2016) carry over into Schubotz et al. (2017); however, the explicit approach proposed by Alexeeva et al. (2020) attempts to account for such notation (known as wildcards in formula retrieval).
**B**: The latter is partially tackled by Alexeeva et al. (2020) through Odin grammar (Valenzuela-Escárcega et al., 2016; Sharp et al., 2019).
**C**: Similar reasoning led Cobbe et al. (2021) to propose a novel MWP dataset due to the difficulty level of previous datasets.
|
CBA
|
CBA
|
CBA
|
BCA
|
Selection 2
|
<|MaskedSetence|> The block memberships are not known a priori; they are recovered a posteriori by the inference algorithm.
In social (resp. <|MaskedSetence|> species) with the same block membership play the same social/ecological role in their system (Boorman and White, 1976; Luczkovich et al., 2003). In food webs, species playing the same ecological role are said to be ecologically equivalent (see Cirtwill et al., 2018, for a review of species role concepts in food webs). <|MaskedSetence|> Two species are said to be regularly equivalent if they feed on equivalent species and are preyed on by equivalent species. This notion of regular equivalence is a relaxation of structural equivalence, which imposes that structurally equivalent species have exactly the same trophic relations in the food web.
In practice, Luczkovich et al., (2003) find that species are grouped into blocks by trophic level and some separation might occur based on trophic chains.
|
**A**: (Mariadassou et al., 2010).
**B**: ecological) networks, individuals (resp.
**C**: When analysing the roles in food webs, Luczkovich et al., (2003) use the notion of regular equivalence to define trophic role.
|
CBA
|
ABC
|
ABC
|
ABC
|
Selection 2
|
This setting arises naturally in several real-world applications
where obtaining negative samples is either too resource intensive or improbable. For example, consider personalized recommendation systems (Naumov et al., 2019) where the training data is typically extracted from user feedback. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> if a user browses a product frequently or watches a movie then the user-item pair is labeled positive) (Chen et al., 2021). This setup has also been motivated by applications like gene and protein identification (Elkan and Noto, 2008), anomaly detection (Blanchard et al., 2010) and matrix completion (Hsieh et al., 2015).
While self-supervised contrastive losses, e.g. InfoNCE (Gutmann and Hyvärinen, 2010), have gained remarkable success in unsupervised representation learning, recent works (He et al., 2020; Kolesnikov et al., 2019) have pointed out that purely self-supervised approaches often produce inferior visual representations compared to fully supervised approaches. To address this issue, researchers (Khosla et al., 2020; Assran et al., 2020; Graf et al., 2021) proposed supervised variants of contrastive loss (SCL) that, by leveraging available supervision, are able to obtain representations as discriminative as (or even more discriminative than) the fully supervised cross-entropy objective.
|
**A**: browsing history) (Kelly and Teevan, 2003) which usually indicates user’s positive preference (e.g.
**B**: user ratings) is hard to obtain, most practical recommendation systems rely on implicit user feedback (e.g.
**C**: Since explicit user feedback (e.g.
|
CBA
|
ABC
|
CBA
|
CBA
|
Selection 1
|
In Carlen et al. <|MaskedSetence|> (2017), a node’s membership vectors are held fixed across layers, but a new affinity matrix is fit for each layer. <|MaskedSetence|> Conversely, in Tarrés-Deulofeu et al. (2019) a Tucker decomposition accounting for layer community structure is fit with the aim to predict types of links in a multilayer network. <|MaskedSetence|> Although layer community structure is addressed in Tarrés-Deulofeu et al. (2019), the number of node-communities is always fixed to equal the number of layer-communities: a missed opportunity to examine layer interdependence by examining the optimal number of layer-communities.
In Wang and Zeng (2019), the authors propose using a Tucker decomposition as a multilayer SBM, but limit their factor matrices to only take on binary values. Thus, the extent to which layer dependence is addressed is limited to the binary clustering of layers and is more similar to the strata work of Stanley et al. (2016). Furthermore, the core tensor is not constrained to be nonnegative, and the proposed algorithm focuses on minimizing the Frobenius norm of the difference between the tensor and the approximation given by the Tucker decomposition.
.
|
**A**: To do so, a new core tensor is fit for each type of link.
**B**: A similar model is proposed in Paul and Chen (2016) but with node membership vectors constrained to take on binary values and with a Bernoulli distribution assumption instead of Poisson.
**C**: (2022) and De Bacco et al.
|
CBA
|
CBA
|
CBA
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> Some output images are shown in Figure 7. As expected, MLP fails to learn the frequency pattern across all three categories. <|MaskedSetence|> Although ChevNet shows comparable results in the high-pass task, it is achieved with 41,537 trainable parameters while our method only requires 2,050 parameters. Lastly, our method is the only one that can learn and accurately resemble the band-pass image, demonstrating better flexibility in learning frequency patterns.
.
|
**A**: Our method consistently outperforms other models.
**B**: GCN, GAT and GIN are able to learn the low-pass pattern but failed in learning the band and high-frequency patterns.
**C**: Table 2: Sum of squared errors
Table 2 shows the square loss of our method alongside MLP and 2 GNNs.
|
CAB
|
BAC
|
CAB
|
CAB
|
Selection 3
|
Both of these approaches could be adapted to suggest a measure of unfairness, by replacing the hard constraints on the absolute differences or on the ratios with a sum of the values of each of the constrained quantities, and using this sum as a measure of unfairness. <|MaskedSetence|> (2017a).
However, applying this adaptation to either type of constraint would lead to undesirable results. If absolute-difference constraints were used, then the value of the unfairness measure in unbalanced cases, in which some labels are very rare, would be meaningless, as it would be close to zero for all reasonable classifiers. As we show in our experiments, rare labels can be encountered, for instance, when dealing with a classifier for rare diseases.
If ratio constraints were used, then the unfairness measure would become unbounded, leading to meaninglessly high values when there are significant differences between some of the conditional distributions. This issue has also been noted by Calmon et al. <|MaskedSetence|> <|MaskedSetence|> Our unfairness measure overcomes these issues because it is based on assigning a clear meaning to the measure.
.
|
**A**: For non-binary fairness attributes and possibly multiclass labels, this may require summing over all pairwise constraints, or comparing to an unconditional distribution, as proposed in Calmon et al.
**B**: As mentioned above, these issues can be attributed to the fact that neither of these approaches assigns a clear model-based meaning to the resulting measure.
**C**: (2017a), who warn against using the ratio test when one of the rates is very small.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> There are infinitely many ranges to probe, yet most return the same motif. <|MaskedSetence|> While both have the same radius, the right one has higher similarity. We use the maximal pairwise distance, called extent, to find the TOP motif among all k-NN queries.
Finally (Right): We introduce elbow plots for a guided extraction of meaningful motif set sizes. <|MaskedSetence|> Overall, we will show that these improvements reduce the runtime and human effort to find motif sets considerably.
|
**A**: Here, rapid changes in similarity when increasing k represent a characteristic change from one motif to another.
**B**:
We illustrate the three main contributions in Figure 1.
Firstly (Left): SotA is based on range queries, with radius r as input.
**C**: A simple, yet overlooked improvement to only get distinct motifs is to use k-NN queries.
Secondly (Center): Consider two motif sets with 3 subsequences each.
|
BCA
|
BCA
|
ACB
|
BCA
|
Selection 1
|
Although regularization is an effective method to deal with linear regression problems, it introduces substantial difficulties into the convergence analysis of the algorithm. <|MaskedSetence|> Compared with the case with i.i.d. <|MaskedSetence|> All these challenges make it difficult to analyze the convergence and performance of the algorithm, and the methods in the existing literature are no longer applicable.
For example, the methods in [27]-[30] and [34] are applicable for the case that the graphs, regression matrices and noises are i.i.d. and mutually independent and it is required that the expectations of the regression matrices be known in [28]-[29].
Liu et al. <|MaskedSetence|> In addition, they require that the graphs be strongly connected and the observation vectors and the noises be i.i.d. and bounded..
|
**A**: [25] studied the decentralized regularized gossip gradient descent algorithm for linear regression models, where the method is applicable for the case that only two nodes exchange information at each instant.
**B**: Compared with the non-regularized decentralized linear regression algorithm, the estimation error equation of this algorithm contains a non-martingale difference term with the regularization parameter, which cannot be directly analyzed by using the martingale convergence theorem as in [39].
We no longer require that the sequences of regression matrices and graphs satisfy special statistical properties, such as mutual independence, spatio-temporal independence and stationarity.
**C**: data, dependent observations and data contain less information and therefore lead to more unstable learning errors as well as performance degradation [40].
Besides, we consider both additive and multiplicative communication noises in the process of the information exchange among nodes.
|
BCA
|
ABC
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> Massimiliano (Max) Lupo Pasini obtained his Bachelor of Science and Master of Science in Mathematical Engineering at the Politecnico di Milano in Milan, Italy. The focus of his undergraduate and master's studies was statistics, discretization techniques, and reduced-order models for partial differential equations. He obtained his PhD in Applied Mathematics at Emory University in Atlanta (GA) in May 2018. The main topic of his doctoral work was the development of efficient and resilient linear solvers for upcoming computing architectures moving towards exascale. <|MaskedSetence|> Since 2020 Max has been a Data Scientist in the Scalable Algorithms and Coupled Physics Group in the Advanced Computing Methods for Engineered Systems Section of the Computational Sciences and Engineering Division at ORNL. <|MaskedSetence|> He is currently the lead of the Artificial Intelligence for Scientific Discovery thrust of the ORNL Artificial Intelligence Initiative.
|
**A**: Max’s research focuses on the development of surrogate models for material sciences, scalable hyperparameter optimization techniques for deep learning models, and acceleration of computational methods for physics applications.
**B**:
Author Biography
{biography}
Massimiliano Lupo Pasini.
**C**: Upon graduation, Max joined the Oak Ridge National Laboratory (ORNL) as a Postdoctoral Research Associate in the Scientific Computing Group at the National Center for Computational Sciences (NCCS).
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 4
|
For all the conducted experiments, we employ a BART-large architecture [4], which is a transformer-based model with a bidirectional encoder and an auto-regressive decoder. <|MaskedSetence|> We use the established parameters for the BART-large architecture and the implementation provided by Hugging Face [33]. <|MaskedSetence|> We use PyTorch version 1.10 and Hugging Face version 4.11.0. All the models were trained using GPUs available in Google Colab, specifically the NVIDIA T4 Tensor Core GPU with 16 GB of memory. <|MaskedSetence|>
|
**A**: All the models are fine-tuned for 100,000 steps with a learning rate of 3×10^-5 and batch size 4, with early stopping on the validation set.
**B**: The code and the compiled dataset are publicly available at https://github.com/tatianapassali/topic-controllable-summarization.
.
**C**: BART-large consists of 12 layers for both encoder and decoder and 406M parameters.
|
CAB
|
ABC
|
CAB
|
CAB
|
Selection 4
|
<|MaskedSetence|> These qubits are typically arranged in planar 2D layouts, while 3D configurations are feasible but more challenging to engineer. Notably, neutral-atom systems enable a technique called qubit shuttling, which allows for all-to-all connectivity for two-qubit gates under certain constraints. <|MaskedSetence|> While functionally equivalent (e.g. [25, 26]), shuttling can impact circuit execution speed and fidelity. <|MaskedSetence|>
|
**A**: Neutral-atom quantum computers have demonstrated the ability to operate on thousands of qubits (e.g. [23]).
**B**: Trade-off analyses suggest that shuttling is a viable substitute for low-fidelity SWAP gates, but not yet a replacement for high-fidelity ones [26].
.
**C**: This effectively emulates 3D architectures even within a 2D layout.
Recent experiments with neutral-atom devices (e.g. [24]) have spurred interest in scalable compilation methods that exploit qubit shuttling to reduce reliance on SWAP gates.
|
ACB
|
ACB
|
ACB
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This simulator uses first-principles modeling based on the Bloch equations to implement a discrete-event simulation of NMR signal production and realistically models noise and partial volume effects of the image production process. Building these simulators requires physical modeling of MR imaging. Our work, in contrast, explores a DL-based method to learn the non-linearities that govern the re-parameterization of MR scans from one parameter to another of our choice.
.
|
**A**: Brainweb is a Simulated Brain Database generated using an MRI simulator, developed at the McConnell Brain Imaging Centre.
**B**:
In contrast, recent works employ advanced platforms such as MRiLab [7] and Brainweb [13], which rely on biophysical models that use complex non-linearities to estimate MR images in different parameters.
**C**: MRiLab is an MR image simulator equipped with the generalized multi-pool exchange model for accurate MRI simulations.
These works also utilize the multi-pool modeling capabilities of MRiLab to simulate the effects of fat-water interference in macromolecular-rich tissues and validate them in a physical phantom.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 4
|
The simulation results are summarized in Fig. 4. It clearly shows that our method can achieve results comparable with the FEM. <|MaskedSetence|> To capture the dynamics of the evolution of the thin diffuse interface, the mesh size should be much smaller than ϵ, the width of the diffuse interface. Traditional numerical methods often use an adaptive or moving mesh approach to overcome this difficulty. In contrast, a neural network-based numerical scheme has a mesh-free feature. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Numerical simulation of phase-field type models is often challenging.
**B**: Consequently, the dimension of the optimization problem in the NN-based scheme is much smaller than in the FEM scheme without using adaptive or moving meshes.
.
**C**: The number of parameters of the neural network can be much smaller than the number of samples needed to resolve the diffuse interface.
|
ACB
|
ACB
|
ACB
|
CAB
|
Selection 3
|
We evaluate the effect of variable ordering on 16 discrete networks ranging from 8 to 109 variables. Many of these networks are widely used in the literature as case studies to evaluate structure learning algorithms. Table 1 lists them and their key characteristics. The Sports, Property, Formed and Diarrhoea networks are available from the Bayesys repository [10], and the rest are obtained from the bnlearn repository [34]. The network definitions specify the causal structure and the Conditional Probability Tables (CPTs) for the discrete variables. (Some CPTs are modified for six variables in Water, two variables in Barley, two variables in Win95pts and six variables in Pathfinder to reduce the risk of single-valued variables at low sample sizes. The probability modifications across these variables are never larger than 0.003. The purpose of this is to reduce the risk of generating single-state variables at lower sample sizes. <|MaskedSetence|> The algorithms studied here do not support such datasets.) We use these definitions to generate synthetic datasets with sample sizes of {10, 20, 40, 50, 80, 100, 200, …, 8×10^6, 10^7}. The default variable ordering of these datasets is alphabetic. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Single-state variables are where the variable takes the same value for all rows in the dataset.
**B**: The final column shows what fraction of the arcs are reversible, that is, those edges which are undirected in the CPDAG.
.
**C**: Table 1 presents the number of variables and arcs in each network, as well as mean and maximum degree and in-degree.
|
ACB
|
ACB
|
CAB
|
ACB
|
Selection 2
|
<|MaskedSetence|> Hladký, Král, and Norine [13] confirmed a conjecture of Baker [2] relating the ranks of a divisor on a graph and on a tropical curve, and provided a purely combinatorial algorithm for computing the rank of a divisor on a tropical curve, which can be considered simply as a metric graph, see [18, 12]. For multigraphs with a constant number of vertices, Manjunath [17] gave a polynomial time algorithm that computes the rank of a divisor. Furthermore, by a corollary of the Riemann-Roch theorem, the rank can be computed in polynomial time for divisors of degree greater than 2g−2, where g denotes the cyclomatic number of the graph (also called the genus). <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Cori and le Borgne [10] provided a linear time algorithm for determining the rank of a divisor on a complete graph.
**B**: Baker and Shokrieh [4] provided an algorithm that can efficiently check whether the rank of a divisor on a graph is at least c for any fixed constant c.
**C**:
Previous work
From the positive side, Luo [16] introduced the notion of rank-determining sets of metric graphs, and verified the existence of finite rank-determining sets constructively.
|
CBA
|
ACB
|
CBA
|
CBA
|
Selection 3
|
In the realm of ANNs, the Squeeze and Excitation (SE) block, introduced by Hu et al. [15], has proven to be a highly effective module for enhancing representation. The SE block can be seamlessly incorporated into a network, requiring only a minimal increase in parameters to recalibrate channel information. By employing squeezing and fully connecting operations, it allows the network to learn a trainable scale factor for each channel. <|MaskedSetence|> Recently, Yao et al. [14] extended the application of the SE block to SNNs by formulating a temporal-wise attention mechanism. This innovative approach enables SNNs to identify critical temporal frames of interest without being adversely affected by noise or interference. By incorporating temporal-wise attention, the proposed technique achieves state-of-the-art performance across various datasets. This accomplishment serves as compelling evidence for the immense potential of attention mechanisms within SNNs. The utilization of SE blocks and the introduction of temporal-wise attention represent significant advancements in the field of SNNs. <|MaskedSetence|> In the Sec. <|MaskedSetence|>
|
**A**: These techniques not only enhance the representation capability of SNNs but also offer insights into effectively leveraging attention mechanisms for improved performance.
**B**: III, we aim to explore and further leverage these attention mechanisms to improve the performance of SNNs and unlock their full potential in complex temporal data processing tasks.
.
**C**: This recalibration process significantly improves the discriminative power of individual channels.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 3
|
<|MaskedSetence|> Based on the observation that owning certain tokens is a necessary precondition for invoking some actions, FlashSyn prunes symbolic action vectors that contain actions requiring tokens not owned by the attacker (a token can be a standard token such as ERC20 or BEP20, or any other form of token such as a debt token or share token). For example, in the Harvest USDC example, invoking the withdraw method requires that users own some share tokens (fUSDC) beforehand. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The only action candidate that mints fUSDC for users is deposit; thus, this heuristic mandates that deposit must be called before invoking withdraw.
**B**:
Heuristic 3: necessary preconditions.
**C**: This heuristic establishes a necessary but not sufficient condition to ensure that synthesized attack vectors will not be reverted.
.
|
ACB
|
BAC
|
BAC
|
BAC
|
Selection 2
|
To visualize the advantage of our approach, consider the HalfCircle domain in Figure 1, adapted from [3]: a 2-dimensional agent must navigate to a goal, located somewhere on the half-circle. A task therefore corresponds to the goal location, and the task distribution is uniform on the 1-dimensional half-circle. <|MaskedSetence|> Intuitively, however, it is the low-dimensional structure of the task distribution that should determine the difficulty of the problem, and our bounds can indeed account for such. We remark that many benchmark meta RL task distributions exhibit similar low dimensional structure [4; 7; 3].
To complement our theoretical results, we demonstrate the practical potential of our approach by incorporating it in the VariBAD meta-RL method of Zintgraf et al. <|MaskedSetence|> In VariBAD, a variational autoencoder (VAE) is used to learn a low-dimensional latent representation of the task distribution, and a deep neural network is trained to approximate the Bayes optimal policy. <|MaskedSetence|>
|
**A**: We show that by estimating the task distribution using our kernel density estimation method, applied to the VAE latent space, and training the policy on sampled tasks from this distribution, we improve the generalization of the policy to tasks not seen in the training data.
.
**B**: The bounds in [33] depend on the number of states and actions, which are continuous here, and even if discretized, would result in excessive bounds.
**C**: [39].
|
CBA
|
BCA
|
BCA
|
BCA
|
Selection 4
|
We have two motivations driving this study. The first is the underdevelopment of automatic term recognition (ATR) methods for identifying CHV across languages. <|MaskedSetence|> Unfortunately, most studies expand English vocabulary and rely on language-specific features that barely apply to other languages. <|MaskedSetence|> <|MaskedSetence|> Considering the initiative proposed by the World Health Organization, “Bridging the Language Divide in Health” (http://dx.doi.org/10.2471/BLT.15.020615), there is still a large population of laypeople using their native languages to share and search for health information. Nowadays, language is a primary barrier for laypeople to search relevant health information on the Internet [25, 26].
.
|
**A**: In order to recognize the expression evolution and enrich the original OAC CHV [17], several studies propose various monolingual ATR methods such as applying string patterns [7, 18, 10, 12], co-occurrence analysis [9], machine learning models [7, 11], and word embedding techniques [13, 14].
**B**: Although English dominates the health information on the Internet, only 379 million people, which represent 4.9% of the world population, are English native speakers.
**C**: Besides, most cross-lingual ATR methods rely on costly resources, such as parallel corpora [19] and manual annotations [20, 21, 22, 23, 24], which are labor-intensive to build up.
The second motivation is that the lack of cross-lingual CHV prevents the development of consumer-oriented healthcare applications for non-English languages.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 4
|
The data that support the findings of this study are openly available. <|MaskedSetence|> AGw, and CelebA dataset from Ref. cel, in accordance with the Terms of Service of the respective web resources. <|MaskedSetence|> IV.2 and the trajectory is available at figshare.com/articles/dataset/Black-box_models_for_TERP_interpretation/24475003. <|MaskedSetence|>
|
**A**: Data generation details for the molecular dynamics of alanine dipeptide are provided in Sec.
**B**: The AG’s news corpus dataset was obtained from Ref.
**C**: Source data is provided with this paper as additional material.
.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 2
|
GTFuzz (Li et al., 2020) is a tool that prioritizes inputs based on extracting syntax tokens that guard the target place. The backward static analysis technique extracts these tokens. Also, GTFuzz benefits from this extraction by improving the mutation algorithm. Smart grey-box fuzzing (SGF) (Pham et al., 2019) is a fuzzer that employs a high-level structural representation of the original seeds to generate high-impact seeds. Similarly, AFLSmart (Pham et al., 2019) is a structure-aware fuzzer that combines the PEACH fuzzing engine with the AFL fuzzer. <|MaskedSetence|> It analyses the CFG of the PUT in an attempt to instrument fewer code blocks, thereby speeding up the fuzzing. AutoFuzz (Gorbunov and Rosenbloom, 2010) is a tool that utilizes fuzzing to verify network protocols. <|MaskedSetence|> It is frequently used to detect security vulnerabilities in input validation and application logic. Various other fuzzers and fuzzing techniques have been developed, each with unique features. <|MaskedSetence|> SYMFUZZ (Cha et al., 2015) controls the selection of paths, while Alexandre Rebert’s approach (Rebert et al., 2014) uses guided seed selection.
One of the weaknesses of pure fuzzing approaches is their inability to find test-cases that explore program code beyond complex guards, as they essentially work by randomly mutating seeds and, therefore, struggle to find inputs that satisfy the guards.
.
|
**A**: For example, directed greybox fuzzing (Böhme et al., 2017b) uses simulated annealing in an attempt to guide the fuzzer to explore a particular section of the program under test.
**B**: Instrim (Hsu et al., 2018) is a Control Flow Graph (CFG)-aware fuzzer.
**C**: It begins with finding the protocol specifications and then finding vulnerabilities by using fuzzing. Also, there is Peach Fuzzer (Zhang et al., 2015), an approach that sends random input to a PUT in an attempt to find security vulnerabilities.
|
ABC
|
BCA
|
BCA
|
BCA
|
Selection 3
|
We consider our method to be the successor of that of Bhrawy and Zaky [7]. They applied a change of variables to classical Jacobi polynomials such that the algebraic singularities of the resulting basis, the JFP basis (which is called thus for reasons we explain in Section 3), conform to those of the solution (the method of Bhrawy and Zaky can be considered to be an extension of that of Kazem et al. [32], which used Legendre polynomials in fractional powers as basis functions). <|MaskedSetence|> <|MaskedSetence|> We believe this approach has not been sufficiently analyzed nor developed to its full potential. <|MaskedSetence|> We shall refer to these matrices as fractional integration matrices. We also illustrate that the algorithms for fractional integration matrices (including the one used by Bhrawy and Zaky) are unstable but can be “pseudo-stabilized” by integrating high-precision computing [5] automatically such that the algorithms output accurate results. We also investigate empirically the performance of the parameter-dependent JFP method for a wider range of parameters than those considered in [7].
.
|
**A**: Therefore, it can potentially be used to solve FIEs and FDEs just as classical orthogonal polynomials are used to solve ODEs.
**B**: In this paper, we develop new algorithms for computing the entries of matrices that represent the action of fractional integration operators on the JFP basis.
**C**: The JFP basis inherits many desirable properties of classical Jacobi polynomials, including orthogonality, see [7].
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 2
|
<|MaskedSetence|> As a robustness check, we repeat the analysis here for 10% random removal. Similar AUC results (Fig. S32 is similar to Fig. S2, Fig. S31 is similar to Fig. S3) are obtained. <|MaskedSetence|> S33 is similar to Fig. S14, Fig. S34 is similar to Fig. S17) are obtained. <|MaskedSetence|>
|
**A**: Analogously, similar precision results (Fig.
**B**: In the main text, we show the analysis for 20% random removal of links.
**C**: These show that our quantitative framework is not affected by the random removal percentage.
.
|
BAC
|
BAC
|
BAC
|
CBA
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We also include the VWW dataset [20], a widely used benchmark for tinyML applications. We train on VWW for 10 epochs following [47]. We use resolution 128 for all datasets and models for a fair comparison.
.
|
**A**: We follow [12] to use a set of vision datasets including Cars [39], CIFAR-10 [40], CIFAR-100 [40], CUB [68], Flowers [54], Food [9], and Pets [55] (Pets uses CC BY-SA 4.0 license; Cars and ImageNet use the ImageNet license; others are not listed).
**B**: We fine-tuned the models on all these datasets for 50 epochs following [12].
**C**: Datasets.
We measure the transfer learning accuracy on multiple downstream datasets and report the average accuracy [37].
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 1
|
Such an established and generally accepted set of benchmarks is clearly missing for permutation-based EAs, which might be one of the reasons why this part of EA theory is less developed. To overcome this shortage, and to do this in a natural and systematic manner, ideally profiting to the maximum from the work done already for EAs using bit-string representations, we now propose a simple way to transform benchmarks for pseudo-Boolean optimization into permutation-based problems. We are sure that future work on permutation-based EAs will detect the need for benchmarks which cannot be constructed in this way, but we are confident that our approach sets a good basis for a powerful set of benchmarks for permutation-based EAs.
We note that there are different classes of permutation-based problems. In problems of the assignment type, we have two classes of n elements and the task is to assign each member of the first class to a member of the second in a bijective fashion. <|MaskedSetence|> In problems of the order type, we have precedence relations that must be respected or that are profitable to be respected. Such problems occur in production scheduling, where a given set of jobs have to be placed on a given machine in an optimal order. <|MaskedSetence|> <|MaskedSetence|> We note that the order and adjacency types were, also under these names, already described in [ES15, p. 68]. Due to the different nature of these types of problems, it appears difficult to define benchmarks that are meaningful for all types. We therefore restrict ourselves to defining benchmarks that appear suitable for the assignment type.
|
**A**: The travelling salesman problem is the classic hard problem of this type, the Eulerian cycle problem is a polynomial-time solvable example.
**B**: Finally, in problems of the adjacency type, it is important that certain items are placed right before another one (possibly in a cyclic fashion).
**C**: The quadratic assignment problem or the stable marriage problem are examples for this type.
|
CBA
|
CBA
|
CBA
|
BAC
|
Selection 2
|
<|MaskedSetence|>
ABPG vs. <|MaskedSetence|> <|MaskedSetence|> The Riemannian gradient descent achieves lower objective values due to proper step-size selection (line search). For the more homogeneous phantoms corresponding to the right column, the proposed method significantly outperforms ABPG [HRX21].
|
**A**: The left column corresponds to the first three phantoms: gear, bone, vessel in Figure 5.1, the right column to the last three: batenburg, roux, skulls.
**B**:
Figure 5.2.
**C**: Riemannian gradient (RG) descent in terms of relative objective values is compared on 2% and 20% undersampled projection data.
|
BCA
|
BCA
|
BCA
|
ABC
|
Selection 1
|
<|MaskedSetence|> One reason for such a belief is that, for non-monotone networks, depth 2 suffices to ensure universality. Any continuous function over a bounded domain can be approximated by a depth-2 network [3, 11, 22] and this universality result holds for networks with threshold or ReLU as activation functions. <|MaskedSetence|> <|MaskedSetence|> Thereafter, a simple argument shows that monotone networks of bounded depth are universal approximators of monotone functions. As noted, this is in sharp contrast to general neural networks, where adding extra layers can affect the efficiency of the representation [16], but does not change the expressive power.
.
|
**A**:
Given the above result, it may seem that, similarly to the case of monotone networks with ReLU activations, the class of monotone networks with threshold activations is too limited, in the sense that it cannot approximate any monotone function with a constant depth (allowing the depth to scale with the dimension was considered in [12], see below).
**B**: We establish a depth separation result for monotone threshold networks and show that monotone networks can interpolate arbitrary monotone data sets by slightly increasing the number of layers.
**C**: Our first main result supports the contrary to this belief.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 1
|
However, since discontinuous Galerkin methods are non-conforming, we cannot directly apply the existing QMC theory.
This paper tries to bridge this theoretical gap. It is structured as follows. <|MaskedSetence|> <|MaskedSetence|> DG in the QMC framework is presented in Section 5 and the corresponding parametric regularity analysis is considered in Section 6. <|MaskedSetence|>
|
**A**: Numerical results, which confirm our theoretical findings, are given in Section 7, while a short conclusion wraps up our exposition.
.
**B**: Section 3 describes randomly shifted lattice rules, and Section 4 gives a brief overview over the analysis of conforming FE methods.
**C**: Notations and preliminaries are introduced in Section 2.
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 1
|
The key to the analysis for each individual arm is that, because of the interleaved mean structure, Alice misses most information of half of the terms held by Bob. Without this information, her adaptivity cannot help much in the task of identifying which arm is more likely to be the best arm. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The adaptivity of the algorithm further complicates the description of the posterior distribution of the arms after one round of pulls. Fortunately, our implicit representation of the distribution classes is flexible enough to handle this additional complexity.
.
|
**A**: Despite appearing natural, it is highly non-trivial to put this intuition into a formal proof since we need to carefully bound the “help” of the historical information exchange.
**B**: On the other hand, the time budget constraint also prevents Alice from extracting and revealing to Bob too much information about her local means of arms which are not published in the next round (see the algorithm Arm Publishing and Additional Pulls in Section 11 for details on how we publish arms).
**C**: A similar argument holds for Bob.
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 2
|
<|MaskedSetence|> Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor and an approximation error bound. Simulations on synthetic and real-world data-sets confirmed that the proposed algorithm is efficient and applicable. <|MaskedSetence|> <|MaskedSetence|> A detailed theoretical analysis of the proposed algorithm needs to be investigated. This is also our future work.
VIII Acknowledgements.
|
**A**: In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD).
**B**: This will be our future work and we are planning to use it to develop fast tensor completion algorithms similar to the strategy done in [52].
**C**: This algorithm can be generalized to higher order tensors according to the paper [46].
|
ACB
|
ACB
|
BAC
|
ACB
|
Selection 2
|
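Each row above pairs a passage containing `<|MaskedSetence|>` placeholders with candidate sentences labeled **A**–**C**, option columns holding permutation strings such as `CAB`, and a `Selection N` label. As a minimal sketch of how such a row can be reconstructed (the field names and the toy row below are assumptions for illustration, not the dataset's actual schema), the placeholders are filled left to right in the order the gold permutation dictates:

```python
MASK = "<|MaskedSetence|>"

def fill_masks(text_with_holes: str, candidates: dict, permutation: str) -> str:
    """Replace each placeholder, left to right, with the candidate
    sentence named by the corresponding letter of the permutation."""
    filled = text_with_holes
    for letter in permutation:
        # replace only the first remaining placeholder each time
        filled = filled.replace(MASK, candidates[letter].strip(), 1)
    return filled

# Toy row in the same shape as the table above (contents invented).
row = {
    "text_with_holes": f"{MASK} {MASK} Hence the method generalizes.",
    "candidates": {
        "A": "First, we define the basis.",
        "B": "Second, we derive the operator.",
    },
    "label": "AB",
}
print(fill_masks(row["text_with_holes"], row["candidates"], row["label"]))
# → First, we define the basis. Second, we derive the operator. Hence the method generalizes.
```

Scoring a model's answer then reduces to comparing its predicted permutation string against the `Selection` column's gold letter order.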