| text_with_holes (string, 114–4.02k chars) | text_candidates (string, 58–3.84k chars) | A (string, 6 classes) | B (string, 6 classes) | C (string, 6 classes) | D (string, 6 classes) | label (string, 4 classes) |
|---|---|---|---|---|---|---|
The feature-weighted semi-supervised method (SSL-PCT-FR) and the non-feature-weighted one (SSL-PCT) have similar trends in predictive performance. <|MaskedSetence|> <|MaskedSetence|> On the other hand, feature weighting clearly damages the predictive performance of the SSL-PCT method on the Bibtex dataset. Thus, feature weighting helps in most cases, but the empirical results cannot support its use by default when building SSL-PCTs for MLC.
We next compare semi-supervised random forests (SSL-RF) with supervised random forests (CLUS-RF). From the results, we can observe that SSL-RF improves over CLUS-RF on several datasets: Bibtex, Corel5k, Genbase, Medical, SIGMEA real, and marginally on the Emotions and Enron datasets. <|MaskedSetence|> In other words, the improvement of SSL-PCTs over SL-PCTs does not guarantee the improvement of SSL-RF over CLUS-RF (e.g., the Mediana and Yeast datasets), and vice versa, SSL-RF can improve over CLUS-RF even if SSL-PCTs do not improve over SL-PCTs (e.g., SSL-RF-FR on the Emotions dataset for 200 and 350 labeled examples). As observed for the single trees, there is no clear advantage to using feature weighting when semi-supervised random forests are built, even though it is somewhat helpful on the Emotions and Enron datasets.
.
|
**A**: Namely, on Birds and Scene datasets, feature weighting is beneficial for the predictive performance of SSL-PCTs, and even necessary for improvement over SL-PCTs on the Birds dataset with $\geq$ 350 labeled examples.
**B**: However, on some datasets, there are notable differences.
**C**: However, as compared to single trees, the improvements of the semi-supervised approach over the supervised are observed on fewer datasets and are smaller in magnitude.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 3
|
<|MaskedSetence|> It can be achieved by performing various vision tasks such as semantic segmentation, depth estimation, and object detection [17][18][19][20]. In the autonomous driving area, Hahner et al. proposed a segmentation model made specifically to deal with foggy conditions [21]. Then, a different work is proposed by Rajaram et al. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: [22] where a model called RefineNet is used to perform object detection.
**B**: II-A Perception-Action Coupling
Among various approaches in the field of autonomous driving, perception has always been the first stage as it is important to understand the surrounding area before planning and action.
**C**: Besides completing a single vision task, the model can be pushed further to perform multiple vision tasks simultaneously to achieve a better scene understanding [23][24].
.
|
BAC
|
BAC
|
BAC
|
ACB
|
Selection 2
|
<|MaskedSetence|> Treewidth plays a fundamental role in the design of exact and approximation algorithms on planar graphs (and more generally, $H$-minor-free graphs) [18, 3, 28].
The main property of such graphs is that they enjoy the bounded local treewidth property. In other words, any planar graph of a small diameter has a small treewidth. <|MaskedSetence|> However, even for very “simple” objects like unit disks, the corresponding intersection graphs do not have locally bounded treewidth. On the other hand, in many scenarios, the treewidth-based methods on such graphs could be replaced by tree decompositions of bounded independence number. <|MaskedSetence|> Galby, Munaro, and Yang [25] use Theorem 1.1 for obtaining polynomial-time approximation schemes for several packing problems
on geometric graphs. It is interesting to note that algorithms on geometric graphs often require a geometric representation of the graph. Sometimes, like for unit disk graphs, finding such a representation is a challenging computational task [29]..
|
**A**: In particular, de Berg, Bodlaender, Kisfaludi-Bak, Marx, and van der Zanden
use tree decompositions whose bags are covered by a small number of cliques, and thus of small independence number, to design subexponential-time algorithms on geometric graph classes [17].
**B**:
Theorem 1.1 appears to be a handy tool in the subarea of computational geometry concerning optimization problems on geometric graphs.
**C**: A natural research direction is to extend such methods to intersection graphs of geometric objects [24, 34].
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 3
|
<|MaskedSetence|> As shown in Fig. <|MaskedSetence|> The red point is the central node, into which we add the above noise. <|MaskedSetence|> In Fig. 4, we observe that, in the yellow node’s neighborhoods, the number of nodes selected to be updated decreases as the level of noise increases when using our method. This indicates that the noisy information could be prevented to some extent by our method, such that the negative influence can be lowered, and thus our method is robust to noise.
.
|
**A**: 4, the variance $\sigma^{2}$ of the Gaussian noise is set to 0, 0.01, 0.03, and 0.1 from left to right.
**B**: The noisy information would be blindly propagated to the yellow node and all its neighborhoods if using previous methods.
**C**: In order to intuitively understand our reinforced neighbor selection module, we design a robustness visualization experiment by showing the actions output by the policy network under different levels of noise added to the UCI dataset.
|
CAB
|
CAB
|
CBA
|
CAB
|
Selection 4
|
Xuqian Ren
received the B.E. <|MaskedSetence|> degree from Beijing Institute of Technology in 2022. <|MaskedSetence|> <|MaskedSetence|> Her current research interests include contrastive learning, image generation and 3D reconstruction.
.
|
**A**: candidate with Computer Science Unit, Faculty of Information Technology and Communication Sciences, Tampere Universities, Finland.
**B**: degree from University of Science and Technology Beijing in 2019, received the M.S.
**C**: She is currently a Ph.D.
|
BCA
|
CBA
|
BCA
|
BCA
|
Selection 4
|
The value-based algorithm Q-learning, a common unit of the dialogue policy module, suffers from overestimation bias (Thrun and Schwartz, 1993; Hasselt, 2010). Prior studies addressed the problem in multiple ways, including (1) bias compensation with additive pseudo costs and (2) a variety of estimators. Bias-corrected Q-Learning (Lee et al., 2013) subtracts a quantity from the target, but this method cannot address the bias from the function approximation (Pentaliotis and Wiering, 2021). It is known that the bias compensation method is labor-intensive and time-consuming (Anwar and Patnaik, 2008; Lee and Powell, 2012). <|MaskedSetence|> Since underestimation bias is not preferable (Hasselt, 2010; Lan et al., 2020), Weighted Q-learning (D’Eramo et al., 2016; Zhang et al., 2017) proposes the weighted estimator for the maximal action value based on a weighted average of estimated action values. However, the weights computation is only practical in a tabular setting (D’Eramo et al., 2017). Our work differs from the foregoing in that it proposes a new estimator which could be generalized into the deep Q-learning network setting.
Overestimation bias is more problematic in the deep Q-learning network (DQN) algorithm (Fan et al., 2020) due to the function approximation errors of DRL. Polishing the estimation tricks of a single model and using ensemble models are two mainstream solutions. Double Q-learning is subsequently adapted to a neural network as Double DQN (Van Hasselt et al., 2016), and Dueling DQN proposes a new action value estimation scheme (Wang et al., 2016). But the two methods still suffer from the biases of the double estimator and the maximum estimator, respectively. Another approach against overestimation bias is based on the idea of ensembling. Averaged DQN controls the estimation bias by taking the average over action values of multiple target networks (Anschel et al., 2017). Later, Lan et al. (2020) claim that an average operation will never completely remove the overestimation bias, and they propose the Maxmin DQN, which takes a minimum from multiple maximums of different ensemble units to estimate the maximum action value in a selective process. <|MaskedSetence|> Recently, the SUNRISE method uses the uncertainty estimates of the ensemble. But it only down-weights the biased estimation (Lee et al., 2021). In this work, the model only uses a value function instead of a combination of multiple value functions and tailors the predicted maximum and minimum of a value function to approximate the optimal action value. <|MaskedSetence|>
|
**A**: Then, Kuznetsov et al. (2020) recognize that Maxmin DQN also suffers from underestimation bias and that the bias control is coarse.
**B**: Double Q-learning (Hasselt, 2010) trades overestimation bias for underestimation bias using the double estimator.
**C**: Our work does not move towards underestimation and avoids the computational complexity of ensemble models.
.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 1
|
To appropriately anticipate the future, it is necessary to understand in detail the observed actions. <|MaskedSetence|> Ego4D [13] is the most extensive daily-life egocentric video dataset currently available to research. It is notable that the Ego4D authors also provide pre-extracted features for each second of video. These features are obtained through the SlowFast 8x8 model [10]. <|MaskedSetence|> The Forecasting Benchmark from Ego4D (which includes LTA) consists of 120 hours of annotated videos from 53 different scenarios. <|MaskedSetence|> Ego4D has a long-tailed distribution both for noun and verb categories, resulting in a highly imbalanced dataset.
.
|
**A**: Due to the recent publication of the Ego4D dataset, only baseline results are provided for comparison.
**B**: The annotations provided contain 478 noun types and 115 verb types, with a total of 4756 action classes across the training and validation sets.
**C**: Human Action Recognition (HAR) from video is itself a large computer vision research field, with increasing interest over egocentric view datasets [7, 13].
|
ACB
|
CAB
|
CAB
|
CAB
|
Selection 2
|
Prior Art.
Most unsupervised time series anomaly detectors use generative models in one-class learning to restore input data [3, 6, 4, 7] or forecast future data [2, 8, 9]. Data normality is implicitly learned behind the rationale of reconstruction or prediction. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> They may also wrongly discard some normal samples of the boundary that are informative and vital in learning data normality.
On the other hand, these methods do not consider information related to the anomaly class when performing their normality learning process. It is difficult to learn accurate normality without knowing what the abnormalities are..
|
**A**: Nonetheless, these additional components themselves might be biased by the anomaly contamination, leading to high errors in the pseudo labeling or anomaly removal.
**B**: The abnormal degrees of observations in time series can be measured according to loss values.
**C**: To achieve a comprehensive delineation of data normality (e.g., deeper inter-metric correlations, longer-term temporal dependence, and more diverse patterns), these methods design advanced network structures (e.g., using variational Autoencoders [7, 10], graph neural networks [2, 9], and Transformer [5, 3]) and new reconstruction/prediction learning objectives (e.g., adding adversarial training [11, 12, 4], ensemble learning [13, 6], and meta-learning [3]).
However, these current methods generally do not have components to deal with the anomaly contamination issue.
There are a few attempts to address this problem, e.g., using pseudo-labels via self-training [14, 15, 16] or an extra pre-positive one-class classification model [12] to remove plausible anomalies in the training set.
|
BCA
|
BCA
|
BCA
|
ACB
|
Selection 3
|
Seq2Seq models (see §3.3) serve as the basis for neural NLG (Sutskever et al., 2011; Cho et al., 2014; Vaswani et al., 2017). As such, to compare the efficacy of neural architectures for long-form D2T, Wiseman et al. (2017) compare the performance of various seq2seq models to their templated counterparts on the RotoWire dataset. Based on their observations, the conditional copy model (Gulçehre et al., 2016) performs the best on both word-overlap and extractive metrics (see §6.2.1) compared to the standard attention-based seq2seq model (Bahdanau et al., 2015) and its joint copy variant (Gu et al., 2016). Similarly, in an evaluation of 62 seq2seq, data-driven, and templated systems for the E2E shared task, Dušek et al. (2018) note that seq2seq systems dominate in terms of both automated word-based metrics and naturalness in human judgement. Wiseman et al. (2017), however, note that the traditional templated generation models outperform seq2seq models on extractive metrics although they score poorly on word-overlap metrics. Thus, the adaptation of seq2seq models to D2T for richer narratives with fewer omissions and hallucinations still remains an active focus of the research community.
It is worth noting that all seq2seq models discussed below operate at the word level. <|MaskedSetence|> However, the attention garnered by them from the research community is slim. <|MaskedSetence|> al. <|MaskedSetence|> In the sections that follow, we detail notable innovations over the last half-decade in seq2seq modeling, branched on the basis of their training strategies - supervised and unsupervised learning..
|
**A**: (Jagfeld et al., 2018) note that as character-based models perform better on the E2E dataset while word-based models perform better on the more linguistically challenging WebNLG dataset, it is hard to draw conclusions on the framework most suited for generic D2T.
**B**: Models operating at the character level (Goyal et al., 2016; Agarwal and Dymetman, 2017; Roberti et al., 2019) have shown reasonable efficacy with the added computational savings from forgoing the preprocessing steps of delexicalization and tokenization.
**C**: From their comparative analysis, Jagfeld et.
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 3
|
<|MaskedSetence|> is exactly the same, i.e., all the manually annotated positive predicate labels are of high quality. Different from other closed-set classification tasks, where each sample corresponds to a unique ground-truth, some specific subject-object pairs in SGG may have multiple reasonable predicates, i.e., the semantics of different predicate categories are interdependent to some extent. Inevitably, this phenomenon has resulted in two annotation characteristics in SGG datasets: 1) Common-prone: When the semantic granularities of these reasonable visual relations are different, the annotators tend to choose the commonest (or coarse-grained) predicate as the ground-truth label. As shown in Fig. 1(a), both riding and on are “reasonable” for the relation between man and bike, but the annotated ground-truth predicate is the less informative on instead of the more convincing riding. And this characteristic occurs frequently in the SGG datasets (cf., more examples in Fig. 1(a)). <|MaskedSetence|> For example in Fig. 1(b), both has and with denote the meaning “be dressed in” for man/woman and shirt, but the ground-truth annotations are inconsistent even in the same image. We further visualize thousands of sampled instances of $\langle$man-has/with-shirt$\rangle$ in VG, and these instances are all randomly distributed in the feature space (cf., Fig. 1(b)). <|MaskedSetence|>
|
**A**:
For the first assumption, by “equally”, we mean that the confidence (or quality) of an annotated ground-truth predicate label for each positive sample (we use “sample” to represent the triplet instance interchangeably, and “instance” to denote an instance of a visual relation triplet)
**B**: Thus, we argue that all the positive samples are NOT equally correct, i.e., a part of positive samples are not high-quality — their ground-truth labels can be more fine-grained (cf., common-prone) or more consistent (cf., synonym-random).
.
**C**: 2) Synonym-random: When these reasonable visual relations are synonymous for the subject-object pair, the annotators usually randomly select one predicate as the ground-truth label, i.e., the annotations of some similar visual patterns are inconsistent.
|
CBA
|
ACB
|
ACB
|
ACB
|
Selection 4
|
It is impractical to launch selfish mining by a single miner for large-scale public blockchains. On the one hand, it is hard to occupy the more than 33% of mining power needed to ensure successful selfish mining in blockchains such as Bitcoin. <|MaskedSetence|> On the other hand, it is a general consensus that a large pool is not in the best interest of the blockchain community. Miners are vigilant when a mining pool controls a large fraction of computation power and may unite to boycott the large mining pool [MinerBoycott] and force the pool manager to cut down the mining power.
The idea of colluding with other rational miners and forming a coalition seems to be an applaudable way to mine selfishly. <|MaskedSetence|> <|MaskedSetence|> Rational miners cannot know whether it is profitable to join the attacker’s branch, nor have the assurance of extra profit from attackers.
.
|
**A**: The overall mining power of the coalition may be enough to launch selfish mining.
**B**: According to [blockexplorer.com], the largest mining pool in Bitcoin only occupies 16% of overall mining power.
**C**: In practice, it is challenging to build trust among rational miners.
|
BAC
|
BAC
|
BAC
|
CBA
|
Selection 1
|
We now move beyond the full-batch setting to the more general setting of minibatch training. <|MaskedSetence|> <|MaskedSetence|> Yet additional factors seem to also be at play in the case of SGD. Specifically, it has been observed that at small batch sizes (where there is more gradient noise), the mid-training sharpness is smaller. One hypothesized explanation [43, 21] is that in the presence of gradient noise, SGD becomes unstable at lower sharpnesses.
Figure 6: During minibatch Adam, the preconditioned sharpness behaves analogously to the sharpness during minibatch SGD. We train a Resnet-50 on ImageNet using minibatch Adam. The preconditioned sharpness is (1) below the full-batch stability threshold (pictured as a horizontal line), (2) smaller when the batch size is smaller, and (3) smaller when the learning rate is larger. <|MaskedSetence|>
|
**A**: In the case of gradient descent and minibatch SGD, it is clear from prior work [20, 21] that during minibatch training, the sharpness is subject to similar effects as during full-batch training.
For one, provided that training is successful, the sharpness never ventures more than a bit above the stability threshold of the corresponding full-batch algorithm [21].
**B**: This can be explained by the fact that SGD is unstable in expectation whenever GD is unstable [14].
**C**: .
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> In their study, CLN025 was shown to be a “fast folder” with the corresponding free-energy barrier of ${\sim}10$ kJ/mol. Similar estimates have also been obtained in Ref. 77. Therefore, we can conclude that the free-energy barriers in the embeddings agree well with previous computational studies.
Note that the simulation of CLN025 performed in Ref. 76 is ${\sim}100$ $\mu$s long compared to our 1-$\mu$s simulation. <|MaskedSetence|>
|
**A**: Comparing the free-energy barriers between the different embeddings in Fig. 4, we can see that they are similar, particularly for the mrse embedding and the free-energy surface spanned by the distance and the radius of gyration, i.e., from 10 to 15 kJ/mol.
**B**: This clearly illustrates the great benefit of combining manifold learning with the ability to learn from biased data sets.
.
**C**: We can compare our results to the unbiased simulation data from the study of Lindorff-Larsen et al. 76 where the authors perform a very long simulation and observe a significant number of folding and unfolding events, thus allowing unbiased estimates of free-energy barriers to be obtained.
|
ACB
|
CAB
|
ACB
|
ACB
|
Selection 3
|
Additional Removal of Short segments
In Fig. 6c there is an instance in which a long segment appears to give rise to exactly one other vessel – it can be seen in the lower right of the panel. <|MaskedSetence|> <|MaskedSetence|> The short segment here contains three nodes while its sibling contains 21. <|MaskedSetence|> Doing so with this tree reduces the number of segments to 14 organised into 5 generations. The final tree is shown in Fig. 6d.
.
|
**A**: Upon closer inspection of the tree, we see that these are bifurcations in which one branch is much shorter than the parent or sibling branches.
**B**: However, this cannot be the case as these were previously removed.
**C**: Once the short segment has been removed, eligible segments can be joined in series.
|
BCA
|
BAC
|
BAC
|
BAC
|
Selection 4
|
<|MaskedSetence|> On the other hand, extrapolation towards out-of-distribution data is a fundamental challenge for most neural networks. For out-of-distribution attributes, the extrapolation ability relies more on the embedding and readout function of the GNN architecture. <|MaskedSetence|> Both [16] and [17] have studied approximating simple arithmetic computation ($+$, $-$, $\times$, and $/$) when extrapolating towards drifted numerical values. In our proposed model, we utilize building blocks from [16] to encourage the model to learn the arithmetic impact when tackling drifted numerical input features.
More recent related works focus on learning the tuned graph representation of routing networks. <|MaskedSetence|>
|
**A**: [18] propose RouteNet, which is regarded as a first work that introduces message passing in deep-learning-based network modeling, in a bipartite graph formulation..
**B**: In our model, we leverage the conclusions from the aforementioned works and cast the graph-size-generalization problem to a transferred formulation of graph,
which is considered to be learnable by spectral graph-convolution methods.
**C**: [7] have shown that multi-layer perceptron (MLP) models can only extrapolate if the test set is expanded from the training set in all geometrical directions in the latent space.
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> We report the score for each task in Appendix F. In our experiments, we observe that (1) Both SPR and SPR-UCB significantly outperform baselines that do not learn temporally consistent representations, including DER, OTR, SimPLe, CURL, and DrQ. <|MaskedSetence|> In addition, we remark that SPR-UCB outperforms SPR significantly in challenging environments including Boxing, Freeway, Frostbite, KungfuMaster, PrivateEye, and RoadRunner. <|MaskedSetence|>
|
**A**:
We illustrate the aggregated mean of human normalized scores among all tasks in Figure 1.
**B**: Please see Appendix F for the details.
.
**C**: (2) By incorporating the UCB bonus, SPR-UCB outperforms SPR.
|
CBA
|
ACB
|
ACB
|
ACB
|
Selection 2
|
The remainder of this paper is structured as follows. <|MaskedSetence|> <|MaskedSetence|> Next, we state the optimal importance sampling control for the decoupled MV-SDE derived using stochastic optimal control and introduce the DLMC estimator with importance sampling from (Ben Rached et al., 2023) in Section 4. <|MaskedSetence|> We combine the multilevel DLMC estimator with the proposed importance sampling scheme and develop an adaptive multilevel DLMC algorithm that feasibly estimates rare-event quantities associated with MV-SDEs. Finally, we apply the proposed methods to the Kuramoto model from statistical physics in Section 6 and numerically verify all assumptions in this work and the derived complexity rates for the multilevel DLMC estimator for two observables..
|
**A**: Then, we introduce the novel multilevel DLMC estimator in Section 5, develop an antithetic sampler for it, and derive new complexity results for the estimator.
**B**: In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved.
**C**: In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) and formulate a DLMC estimator.
|
BCA
|
BCA
|
CBA
|
BCA
|
Selection 1
|
<|MaskedSetence|> Advanced Scientific Concepts, the company responsible for the OSIRIS-REx 3D flash LiDAR used in the GN&C, reports a range error of 5-10 cm for the model “GSFL-16KS Space” (https://asc3d.com/gsfl_16Ks/) in a range below 6 km. <|MaskedSetence|> [27] report for Hayabusa 2 an operational range from 30 m to 25 km, with 1 m to 5.5 m errors at each respective range bound. Although both LiDARs are very different (e.g., Hayabusa 2 LiDAR is not a 3D flash LiDAR), we think it is reasonable to mix both to avoid taking an over-conservative approach. As we will show soon, the range measurements are assumed to be made only by the LiDAR, which is unrealistic. In practice, the optical navigation would complement the range data, settling the significance of the LiDAR errors if there is already an onboard shape of the asteroid. <|MaskedSetence|>
|
**A**: Mizuno et al.
**B**: In fact, in some missions, as is the case of OSIRIS-REx, the LiDAR measurements are not even considered in the orbit determination, being more critical for fault detection [54]..
**C**:
We note that this is a conservative approach.
|
CAB
|
CBA
|
CAB
|
CAB
|
Selection 4
|
<|MaskedSetence|> In section 3, we provide an overview of the paper’s main results and state the main theorems. In section 4, we review related literature and existing results on computing Brascamp–Lieb constants. Section 5 provides formal proofs of the paper’s findings. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: We conclude with a general discussion of the paper’s results and limitations, as well as avenues for future investigation, in section 7.
.
**B**: 1.3 Outline
The paper is structured as follows: In section 2 we introduce basic background and notation, including Thompson geometry on the space of positive definite matrices and the class of Brascamp–Lieb inequalities.
**C**: In section 6, we provide a discussion on alternative Picard iterations that could be used to compute Brascamp–Lieb constants.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
In this study, we propose novel centrality measures that leverage the persistence of homology classes and their merge history along the filtration. <|MaskedSetence|> These homology-based centrality measures produce, for all cycle generators, curves that reflect the relative importance of the corresponding generator throughout its entire evolution. <|MaskedSetence|> Accordingly, we establish some properties that include the stability of these measures under a distance analogous to norms in Lebesgue spaces and persistence landscapes.
2. <|MaskedSetence|>
|
**A**: Integral to this is the development of an algorithm that captures the merge history of homology classes.
**B**: Preliminaries.
**C**: By applying these centrality measures on toy models, we demonstrate the consistency of detected information by these measures to other topological summaries, and highlight its ability to capture new information possibly missed by other summaries.
|
ACB
|
ACB
|
ACB
|
CBA
|
Selection 1
|
Laine et al. (DBLP:conf/iclr/LaineA17) develop Temporal Ensembling, which maintains an exponential moving average of label predictions on each training example and penalizes predictions that are inconsistent with this target.
However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this issue, Tarvainen et al. (DBLP:conf/nips/TarvainenV17) design Mean Teacher, which averages model weights instead of label predictions. <|MaskedSetence|> <|MaskedSetence|> The core of the method is to flexibly adjust thresholds for different classes at each time step to let pass informative unlabeled data and their pseudo-labels.
However, the success of the typical SSL largely depends on the assumption that the labeled and unlabeled data share an identical class distribution, which is hard to meet in real-world applications. <|MaskedSetence|> To bridge this gap, Zhao et al. (Zhao_2022_CVPR) put forward a new SSL learning framework, named Distribution Consistency SSL, which rectifies the pseudo-labels from a distribution perspective..
|
**A**: Furthermore, Zhang et al. (DBLP:conf/nips/ZhangWHWWOS21) introduce a curriculum learning approach to leverage unlabeled data according to the model’s learning status.
**B**: The distribution mismatch between the labeled and unlabeled sets can cause severe bias in the pseudo-labels of SSL, resulting in significant performance degradation.
**C**: In FixMatch (DBLP:conf/nips/SohnBCZZRCKL20), the method first generates pseudo-labels using the model’s predictions on weakly-augmented unlabeled images, and is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image.
|
CAB
|
CAB
|
CAB
|
CBA
|
Selection 3
|
The dense prediction results are summarized in Tab. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> On ConvNeXt-S, the performance is lower than that of the fine-tuning counterparts. We argue that the inferior performance of Conv-Adapter on ConvNeXt-S on dense prediction tasks is due to the severely reduced model capacity, as the number of trainable parameters is reduced by more than 50%. Nevertheless, they can still outperform the ResNet50 with fewer total parameters.
This indicates there might be overfitting issues, and we encourage more future studies on this topic.
.
|
**A**: On ResNet50, Conv-Adapter surpasses fine-tuning with fewer trainable parameters (including the dense prediction heads) for object detection and semantic segmentation.
**B**: We observe a different effect of Conv-Adapter on two types of backbones.
**C**: 4.
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 3
|
Online learning methods enable incremental model updates from sequential data, offering greater efficiency and scalability than traditional batch learning. <|MaskedSetence|> Online Mirror Descent, an extension of Mirror Descent [41], utilizes a gradient update rule in the dual space, leading to improved bounds. The adaptive subgradient method [42] dynamically adjusts its regularization term based on the current subgradient. <|MaskedSetence|> This approach marks the first exploration into simultaneously detecting changepoints and estimating unknown parameters within PDE dynamics based on observed data. We have three main contributions: (i) We introduce a novel strategy that leverages PINNs alongside the Total Variation method for detecting changepoints within PDE dynamics. This approach not only identifies the timing of changes but also facilitates the estimation of unknown system parameters. (ii) We propose an online learning technique aimed at optimizing the weights within the loss function during training. By adaptively adjusting these weights, our method not only enhances the model’s estimation accuracy but also increases its robustness against the instabilities associated with rapid parameter variations. <|MaskedSetence|> The theoretical results also indicate that the weight update method does not alter the neural network’s optimization objective on average..
|
**A**: Follow-the-Regularized-Leader [43, 44] is a stable extension of Follow-the-Leader [45, 46], obtained by adding a strongly convex regularization term to the objective function to achieve a sublinear regret bound.
In this work, we present an innovative methodology that combines changepoints detection with PINNs to address changes and instabilities in the dynamics of PDEs.
**B**: Regularization technique is widely used in online convex optimization problems [40].
**C**: (iii) We present several theoretical results to show that our re-weighting approach minimizes the training loss function with a regularizer and demonstrates that the regret is upper bounded.
|
BAC
|
BAC
|
CBA
|
BAC
|
Selection 4
|
<|MaskedSetence|> In that case, $g(U)=g(W)$, and since $W$ is a waypoint on $f$, we see that
$u$ has a palindrome of length $2\ell+1$ centred at $g(W)$. <|MaskedSetence|> <|MaskedSetence|> Thus, $\langle f,g\rangle$ is a reflection on $\langle f,g''\rangle$ admissible for $u$. Moreover, we are guaranteed that
$g''$ is surjective, because otherwise $u^{f}=u^{g}=u^{g''}$ would have a shorter.
|
**A**: Let $g''$ be a reflection
on $g$ over $[U,V]$ (dashed lines).
**B**: Then $u^{g''}=u^{g}$.
**C**: Suppose next that $g$ has a waypoint at $V$, but nowhere else
in the interval $J$ (Fig. 9(b)).
|
CAB
|
CAB
|
ABC
|
CAB
|
Selection 2
|
Herein, let us first provide some perspectives in Section 2 on how one formulates inverse problems, and how the different philosophical approaches inform our approach. <|MaskedSetence|> Having so set the stage, in Section 5, we provide “vignettes” for three ways in which we believe information densities can be used in practice. Section 6 then explores one of these – the choice of mesh for discretizing an infinite-dimensional inverse problem – in detail and with numerical and quantitative results. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: We conclude in Section 7.
**B**: We then address the goals mentioned in the previous paragraph by first considering a finite-dimensional, linear model problem in Section 3 that we use to provide a conceptual overview of what we are trying to achieve, followed by the extension of this model problem to the infinite-dimensional case in Section 4.
**C**: Two appendices discuss the extension of our work to nonlinear problems (Appendix A) and explain the derivation of the mesh refinement criteria that we compare against the method we propose in Section 6 (Appendix B).
.
|
BAC
|
BAC
|
CAB
|
BAC
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> Based on our thinking about hierarchies mentioned above in Section 5.4, it occurred to us that we needed to subdivide the legacy annotations into categories of different orders of complexity that might be used for different downstream tasks. We unflatten the vocabularies by classifying them into three categories: ”descriptive,” ”decorative” or ”interpretive.” For this, we designed a simple labeling dashboard that presents three of the possible labels for a given image from the dataset.
We qualify descriptive examples as those that anyone, without specialized knowledge, say a child, could learn to recognize. An illumination might contain ”chauve” (bald), or an ”épée” (sword) or a ”barbe” (beard). In the case that the label is quite close to contemporary image hierarchies, it would be a somewhat straightforward task for computer vision. The case of Figure 7 ”assis” (seated) is a body posture, a type of label that specialists in iconography take great interest in and that the database Initiale uses quite often (Garnier, 1982). Whereas perhaps not as straightforward as a beard or a sword, it is nonetheless easy to recognize without expert knowledge, and existing models perhaps provide a starting point (Schlecht et al., 2011). The second category of labels is ”decorative” and largely refers to a set of very detailed labels that only a manuscript specialist would be proficient in. <|MaskedSetence|> Theoretically, someone could learn to recognize these forms as well, although the vocabulary for them is highly specialized and their prevalence in the dataset can sometimes be uncommon. The third category of labels is ”interpretive” by which we mean that some extra-visual context (usually textual context) is required to be able to label the item. Examples might include a proper name such as ”Saint Jean” where no particular iconographic element points to the figure being John. Alternatively, a label of a book of the Bible may be given to a highly generic initial form, and only its placement in the codex provides the clue. Or, as in Figure 7, ”sommeil” (sleep) is present in Genesis 2:21 and may be indicated by the posture of a human figure, but can only be understood by association with the text in mind..
|
**A**: The original labels had been designed for search and retrieval of images for forms of close viewing, rather than distant ones.
**B**: These include motifs such as the ”rinceau” (a wavy form depicting leafy stems), the ”protomé” (a partial representation of a body found at the end of a decorative motif) or, as in Figure 7 ”anthropomorphe” (a motif in the shape of a human figure).
**C**:
In our exploration of the legacy labels of the Mandragore and Initiale databases, we realized limitations for their use in working with computer vision for tasks such as object detection or image segmentation.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 3
|
IV-B Implementation Details
We adopt our methods to fine-tune 5 different scales of PLMs, i.e., BERT-large (340M), BERT-base (110M), BERT-medium (42M), BERT-small (29M) and BERT-tiny (4.4M). The cutting-edge method P-Tuning-v2 [34] is introduced to perform basic prompt-tuning. We use the AdamW optimizer to tune our models with 8 NVIDIA A100 GPUs. For each dataset, we perform a grid search for the learning rate on {5e-3, 7e-3, 1e-2}, the batch size on {16, 32, 64}, and the number of epochs on {20, 40, 60, 80, 100}. The dropout rate is set to 0.1 and the input sequence length is 128. <|MaskedSetence|> However, in the vanilla PoT scenarios, the prompt lengths across different tasks should be equal, so that the prompt embeddings can be reused between different tasks. Hence, we follow the prior works [10, 13] and set a fixed prompt length for all tasks, which may cause the performance discrepancy between the original P-Tuning-v2 paper [34] and our re-implementations.), i.e., continuous prompts with 20 token lengths are added in every layer as prefix tokens (the detailed hyper-parameters are also listed in Table I). <|MaskedSetence|> Specifically, since SPoT [10] is the first and most representative PoT method, we use it as the main baseline method. <|MaskedSetence|>
|
**A**: For references, we compare our PanDa approach with full-parameter model-tuning, regular prompt-tuning methods (i.e., the vanilla prompt-tuning method, Lester et al. [8], and a more powerful prompt-tuning method, P-Tuning-v2 [34]) without any transfer, and vanilla PoT methods.
**B**: Moreover, we also compare our method with the more cutting-edge PoT methods, i.e., ATTEMPT [40] and SEMoE [41]..
**C**: For the prompt length, we follow prior work [10] and set it to a fixed number of 20 tokens in all cases (as stated in the P-Tuning-v2 paper [34], different tasks achieve their best performance with different prompt lengths.
|
CAB
|
CAB
|
BCA
|
CAB
|
Selection 1
|
<|MaskedSetence|> When Russia invaded Ukraine on 24 February 2022, war-related attacks on the two countries were regularly reported (New York Times, 2022). A popular narrative is that the engagement of low-level cybercrime actors and volunteers could be a game changer and could undermine Russia’s war (Foreign Policy, 2022). Some commentators predicted it will be the first full-scale cyberwar (Atlantic Council, 2022), its effects will last for decades (Serpanos and Komninos, 2022), and youngsters would be drawn into a ‘cyberwar’ by joining IT Army of Ukraine – a group backed by the Ukrainian state to co-ordinate volunteers and civilians to help disrupt Russian assets (Soesanto, 2022; Mykhailo Fedorov, 2022). Some have suggested a ‘real cyberwar’, predicting hacktivist attacks on Russia would escalate further through 2022 (TASS Russian News Agency, 2022). <|MaskedSetence|> Although less likely to grab headlines, a contrary narrative around ‘overhyped cyberwar’ suggests cyber operations in the conflict have been slow (Maschmeyer and Dunn Cavelty, 2022) and insignificant (Kostyuk and Gartzke, 2022), while claims of an unprecedented level of cyberattacks and Russia’s much-vaunted cyber capabilities are questionable (Nature News Explainer, 2022; Givens et al., 2023; Willett, 2022). <|MaskedSetence|>
|
**A**: GCHQ commented the cyber conflict had not yet materialised (Financial Times, 2022) and pointed to the resilience of Ukraine’s defences (The Economist, 2022).
.
**B**:
Russia and Ukraine have a long history of electronic information warfare (Margarita Jaitner, 2015) and are among the most active cybercrime hubs (Lusthaus et al., 2020).
**C**: These narratives regularly appear in the press and play a role in shaping domestic policy responses to cybercrime.
|
BCA
|
BCA
|
CBA
|
BCA
|
Selection 4
|
In Tables 3 and 4 we present the results when ranking images for different $k$ after applying the best perturbation to each image. Table 3 presents the findings for the CIFAR10 and ImageNet datasets, while Table 4 provides the results for the X-Ray and Road Sign datasets. <|MaskedSetence|> The rows detail the combinations of architectures for the victim and surrogate models.
The results underscore the proficiency of HET in consistently pinpointing the most transferable perturbation for a given sample. The significance of this capability is highlighted by the comparison to the lower bound of transferability—averaging at or below 30% across the datasets—which HET substantially elevates to an average of 70% or greater. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Each cell within these tables indicates the average transferability from 100 random images selected from the dataset, with the columns denoting the ranking methods alongside the established upper and lower bounds.
**B**: In the X-Ray and ImageNet datasets, HET achieves perfect transferability across almost all black-box scenarios (selection of architecture combinations).
.
**C**: Particularly striking is HET’s performance for the ImageNet, X-Ray, and Road Sign datasets, where it nearly mirrors the upper bound.
|
ACB
|
BAC
|
ACB
|
ACB
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> In particular, we identify four key features that can severely hinder the information extraction process via kernels. These include: i. the expressivity of the data embedding, ii. <|MaskedSetence|> global measurements and iv. noise (see Fig. 1). For each source, we derive an associated concentration bound..
|
**A**: In this section, we investigate the causes of exponential concentration for quantum kernels.
In broad terms, the exponential concentration of quantum kernels may be viewed as stemming from the fact that, in certain situations, it can be difficult to extract information from quantum states.
**B**:
Given that exponential concentration leads to trivial data-independent models, it is important to determine when kernel values will, or will not, concentrate.
**C**: entanglement, iii.
|
ABC
|
BAC
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> Specifically, our method takes advantage of recent powerful action recognition models and mimics human drivers’ behaviours by utilizing only visual information. The first approach (RGB+3DN) utilises only visual information similar to what humans use to recognise actions, i.e. raw video data collected by cameras. <|MaskedSetence|> The second approach (RGB+BB+3DN) employs the same action recognition models, but uses information of vehicle bounding boxes to improve the detection performance. <|MaskedSetence|>
|
**A**: A few works utilising action recognition methods for intelligent vehicle systems have been proposed.
Nonetheless, these methods all require additional annotations to pre-process data, e.g., identifying a target vehicle in the video stream [26, 12, 10].
To address this problem, we propose a novel end-to-end framework involving two approaches for lane change recognition.
**B**: This method is tested by 3D action recognition networks including I3D networks [3], SlowFast networks [7], X3D networks [6] and their variants.
**C**: The vehicle bounding box information is embedded into each frame of the RGB video data to enhance the detection and prediction and passed to a pre-trained action recognition model..
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 3
|
of uncertainty scores. The corresponding histograms for the GCJ dataset
are shown in Figure 7. <|MaskedSetence|> <|MaskedSetence|> Between 40% and 60% of the authors
cannot be identified well. <|MaskedSetence|> induces a two-sided distribution. Some authors are well protected while
others are perfectly identifiable. We attribute this observation to the
tendency of neural networks, as used by Abuhamad et al., to not generalize in.
|
**A**: In contrast, the approach of Abuhamad et al.
**B**: The method of Caliskan et al.
**C**: leads to a one-sided distribution.
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 4
|
Comparison to State-of-the-art Methods. CoCoOp, ProGrad, KgCoOp, and PLOT are recent advancements that build upon the foundation of CoOp. When comparing SoftCPT-NATA to CoCoOp, our method demonstrates impressive average accuracy improvements of 3.19%, 6.35%, 10.93%, and 3.40% on the General-10, Plant-6, RS-8, and Fashion-20 datasets, respectively. <|MaskedSetence|> <|MaskedSetence|> Nevertheless, on the three specialized datasets, SoftCPT-NATA surpasses PLOT in terms of mean accuracy, thereby validating the effectiveness of multi-task learning in these contexts. It’s worth noting that PLOT incorporates multiple contexts through optimal transport, making it a strong baseline. <|MaskedSetence|>
|
**A**: Similarly, against ProGrad, SoftCPT-NATA achieves accuracy boosts of 0.11%, 5.22%, 5.79%, and 1.96% on the corresponding datasets.
**B**: Moreover, since SoftCPT and PLOT explore distinct optimization directions, there is potential for integrating these two techniques to further enhance performance.
.
**C**: In the case of PLOT, while SoftCPT-NATA slightly lags behind on the General-10 dataset, this is understandable given the limited benefit from multi-task learning due to the weak relationship among the tasks.
|
ACB
|
ACB
|
ACB
|
BCA
|
Selection 2
|
<|MaskedSetence|> The proposed iLCC-LCS achieves a significant performance gain on the challenging task $\rightarrow$L7. Among the methods considered, the proposed iLCC-LCS stands out as the only one that outperforms ERM. <|MaskedSetence|> In cases where the label distribution changes significantly, traditional approaches may lead to the development of uninformative features, making them less effective for domain adaptation. <|MaskedSetence|>
|
**A**:
Table IV depicts the results of different methods on the challenging Terra Incognita dataset.
**B**: In contrast, the proposed method excels at capturing and adapting to these label distribution changes, enabling accurate predictions even under such dynamic conditions.
.
**C**: This superiority is especially pronounced due to the substantial variation in label distribution across domains.
|
CBA
|
ACB
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> Preallocation means that before the combinatorial auction, a non-exclusive assignment of channels to tenants is performed (i.e. a channel is potentially assigned to multiple tenants as well), and each tenant considers only the subsets of channels preallocated to it in the bidding process. In other words, the search space of the CA optimization problem is constrained. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: 1.
.
**B**: I-C The Concept of Preallocation
The method proposed in [16] overcomes this problem by applying a preallocation of channels.
**C**: The relation of the preallocated and finally allocated channel sets is depicted in Fig.
|
BCA
|
CAB
|
BCA
|
BCA
|
Selection 4
|
In this work, a novel variational autoencoder architecture TCVAE for distributional drift adaptation is introduced to model the dynamic dependencies between historical observations and future data varying with time in MTS. <|MaskedSetence|> We propose to take advantage of transforming the temporal Gaussian into a flexible distribution that breaks the limitations of distributional form for inferring the temporal conditional distribution. <|MaskedSetence|> First, TCVAE learns the distributional drift adaptation in estimating the conditional distribution of future data based on the assumption that the distributional drift frequencies of the training and testing sets are the same. However, cases with differing drift frequencies exist in some scenarios. <|MaskedSetence|> Second, we will explore its application in other real distribution-drift scenarios, such as research hot spots, stock trend forecasting, and product demand analysis..
|
**A**: Specifically, we design a temporal Hawkes attention mechanism to represent temporal factors that estimate the temporal Gaussian and a gated attention mechanism to dynamically adapt the network structure of the Transformer-based encoder and decoder.
**B**: We will consider the dynamics to improve our model in the future.
**C**: In a variety of MTS forecasting applications, TCVAE based on more robust representation regularly outperforms previous techniques.
We will consider future work from two perspectives.
|
ACB
|
ACB
|
ABC
|
ACB
|
Selection 1
|
<|MaskedSetence|> Recent research also proposes a unifying perspective that bridges the gap between VAEs and nonlinear ICA [35].
In this work, we revisit the importance of introducing physics-based dynamics laws into representation learning. We find that attributes normally associated with meaningful representations naturally and directly arise from our physics-based framework, which we call Dynamics Constrained Auto-encoder (DynAE). To our knowledge, this is the first representation learning algorithm which regularizes the latent space purely based on its dynamical properties. It achieves latent space regularization solely through the inherent dynamics, setting it apart from existing dynamical VAE methods [30, 29] that maximize the likelihood of probability distributions and still demand prior knowledge of the latent variable’s probability distribution, often assuming Gaussian distributions. This innovative approach eliminates the need for any prior information about the distribution of the latent space, a requirement that is typically unmet in complex systems. <|MaskedSetence|> <|MaskedSetence|> This seemingly simple adjustment unlocks the ability to learn distinctive representations for a wide array of systems characterized by diverse probability distributions, as demonstrated in our work..
|
**A**:
In addition to autoencoder-based methods, there exists an extensive body of literature in the field of nonlinear Independent Component Analysis (ICA) that leverages the temporal structure of data [31, 32, 33, 34].
**B**: Rather than focusing on regularizing the latent variable distribution, we shift our attention to regularizing the transition density distribution.
**C**: Furthermore, we tackle the inherent challenge [36, 37] posed by the VAE objective by extending the concept of the sliced Wasserstein auto-encoder.
|
ACB
|
CAB
|
ACB
|
ACB
|
Selection 3
|
If we consider multiple waves beside the LoS and the reflected wave of the two-ray channel model, then we obtain the ray tracing model [73], [74], [75]. This method determines the interaction of multiple radiated waves with the environment (e.g. <|MaskedSetence|> The accuracy of the ray tracing model increases when the map is more detailed and when more electromagnetic interactions are considered (e.g. <|MaskedSetence|> <|MaskedSetence|> This model draws a straight line between both nodes, counts the number of walls and floors crossed [76], and then represents the losses due to those walls and floors as described in [77].
.
|
**A**: reflection, refraction, diffraction).
**B**: However, this also increases the computational load.
Let us discuss another deterministic channel model which has been specially devised for indoor operations, and which also requires a map of the building as well as the positions of both transceivers.
**C**: buildings, floor, walls) before arriving at the receiver’s antenna and requires a computational map of the area in which both the transmitter and the receiver operate.
|
CAB
|
ABC
|
CAB
|
CAB
|
Selection 1
|
In the case of mixtures, no further analytical steps are required, at least for models that fulfill the requirements of Prop. 4 or Prop. 5. Hence, concrete expressions for lower bounds (or for the log-likelihood) can be stated with only minor effort, as entropies of the most common exponential family distributions are known, or can relatively easily be obtained (also compare, e.g., Nielsen and Nock, 2010). More effort may be required for less conventional mixtures or heterogeneous mixtures, but applications of Theorem 1 or 2 will presumably be similar to the application to standard mixtures as treated here.
Theorem 1, Theorem 2 and further generalizations. If a generative model consists of one set of latents and one set of observed variables (see Def. A), Theorem 1 will apply to most generative models. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Theorem 1 will, therefore, be the relevant result in presumably most cases..
**B**: The reason is that a majority of distributions
is of the exponential family with constant base measure.
**C**: Only a minority of distributions has non-constant base measure (e.g., Poisson).
|
CAB
|
BCA
|
BCA
|
BCA
|
Selection 2
|
There are works incorporating counterfactual inference in recommendation. <|MaskedSetence|> Counterfactual inference is also used to address the clickbait issue, which estimates the direct effect of exposure features in the prediction and removes it from recommendation scores [41].
Besides, some works learn causal embeddings to get unbiased representations for users and items. <|MaskedSetence|> <|MaskedSetence|> However, obtaining such uniform data requires randomly exposing items to users, which may hurt user experience, and the data is usually of a small scale, which makes the learning less stable.
.
|
**A**: By separating user and item embeddings for interest and conformity respectively, each embedding can capture only one cause by training with cause-specific data [42].
**B**: [12, 40] use multi-task learning to estimate the contribution of each cause and perform counterfactual inference to remove the effect of item popularity during testing.
**C**: And bias-free uniform data can be used to guide the model to learn unbiased embedding, forcing the model to discard item popularity [43].
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 3
|
The quality-aware feature extractor consists of two stages including the multi-scale feature generator as well as the feature disentanglement.
1) Multi-scale Feature Generator. <|MaskedSetence|> We provide the structural details of our multi-scale feature generator in Fig. 4. As shown in the figure, the input of the feature generator is the image patch cropped from the whole image and the feature generator contains five stages at different scales. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Each stage of the generator consists of several convolutional layers and one max-pooling layer.
**B**: The processing of stage $t$ can be formulated as follows,.
**C**: The multi-scale feature generator is constructed based upon the principle that quality-aware features can be well established by the statistics of deep representations at different scales [62, 63, 64, 65].
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 3
|
<|MaskedSetence|> In this section, we formalize and extend this discussion: we propose a list of properties desirable for a good homophily measure. Our analysis is motivated by recent studies of clustering and classification performance measures [8, 9], but not all their properties can be transferred to homophily measures. <|MaskedSetence|> For the same reason, the distance property (requiring a measure to be linearly transformed to a metric distance) cannot be defined. <|MaskedSetence|>
|
**A**: On the other hand, some of our properties are novel.
Maximal agreement
**B**: Above, we discussed some disadvantages of existing homophily measures.
**C**: For instance, we do not require symmetry — a property that a measure does not change when we swap the compared objects — since homophily compares entities of different nature (a graph and a labeling).
|
BCA
|
BCA
|
ABC
|
BCA
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> H.Z. would like to thank the American Institute of Mathematics and the AIM workshop Analysis on the hypercube with applications to quantum computing. He is also grateful to the organizers and other participants for creating an active atmosphere. The research of C.R. <|MaskedSetence|> C.R. acknowledges the support of the Munich Center for
Quantum Sciences and Technology, as well as the Humboldt Foundation. C.R. would like to thank Amanda Young for fruitful discussion on the applications of Friedgut’s Junta theorem to learning quantum dynamics. The research of M.W. was funded by the Austrian Science Fund (FWF)
under the Esprit Programme [ESP 156]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
The authors want to thank Francisco Escudero Gutierrez and Hsin-Yuan Huang for helpful comments on an earlier version of the paper. They are grateful to the referees for the careful reading and helpful comments.
|
**A**: has been supported by ANR project QTraj (ANR-20-CE40-0024-01) of the French National Research Agency (ANR).
**B**: H.Z.
**C**: is supported by the Lise Meitner fellowship, Austrian Science Fund (FWF) M3337.
|
ACB
|
BCA
|
BCA
|
BCA
|
Selection 4
|
Fig. 1 depicts the typical, key components in multiprocessor secure boot. <|MaskedSetence|> <|MaskedSetence|> The two processors are connected – via a shared bus – to a (mutable) NVM and a shared memory. In addition to the shared bus, the BSP and the AP can also communicate with each other through a dedicated channel for exchanging inter-processor interrupts (IPIs).
Figure 1. Key components in multiprocessor secure boot. <|MaskedSetence|>
|
**A**:
.
**B**: Each of them has a burned-on-chip private key serving as its hardware-specific identifier.
**C**: Without loss of generality, we assume throughout the paper that the motherboard of a multiprocessor device is equipped with two processors on physically separated chip sockets – one acts as BSP and the other as AP.
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 1
|
In this paper, a general notion of dissipativity with dynamic supply rates was introduced for nonlinear systems, extending the notion of classical dissipativity. <|MaskedSetence|> In these results, both dynamical systems are characterised by compatible dissipation inequalities with respect to “coupled”
dynamic supply rates. Satisfaction of the dissipation inequalities is aided by the dynamics of possibly distinct auxiliary systems. <|MaskedSetence|> <|MaskedSetence|> This coupling test is simple to compute if the supply rate operators are chosen to be LTI. Moreover, a meaningful comparison with the integral quadratic constraint based input-output approach to feedback stability was.
|
**A**: The results were shown to recover several known results in the literature.
**B**: A noteworthy specialisation of the results is a simple coupling test to verify whether the feedback interconnection of two nonlinear systems, each satisfying independent (Ψ, Π, Υ, Ω)-dissipation inequalities, is asymptotically stable.
**C**: Lyapunov and asymptotic stability analyses were performed for feedback interconnections of two dissipative systems satisfying dissipativity with respect to dynamic supply rates.
|
CAB
|
CAB
|
CAB
|
ACB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> First, we propose an almost sure reciprocal control barrier function (AS-RCBF) ensuring the safety of a set with probability one, which is considered as a stochastic version of an extended RCBF in [5]; see also [4] (and note that the condition is relaxed around the boundary of the safe set compared with an RCBF in [1]). <|MaskedSetence|> Then, we suggest a new stochastic ZCBF for calculating a probability that a trajectory achieves a designed subset of a safe set before leaving the safe set. Our stochastic ZCBF satisfies an inequality, which differs from the previous results in [9, 10, 11, 12, 13, 14, 15] because the inequality directly includes the diffusion coefficients. In the procedure, we also provide control design strategies using AS-RCBF/AS-ZCBF and our stochastic ZCBF. In addition, we demonstrate our stochastic ZCBF is available for stochastic systems including input constraints by simple examples.
.
|
**A**: Second, we propose an almost sure zeroing control barrier function (AS-ZCBF) satisfying an inequality somewhat different from the one in [12].
**B**:
In this paper, we propose a way of analyzing safety probability for a stochastic system via a CBF approach.
**C**: The contributions of this paper are as follows.
|
BAC
|
BCA
|
BCA
|
BCA
|
Selection 4
|
The notion of legibility has also been applied to scenarios of planning under uncertainty. Miura et al. <|MaskedSetence|> In L-MDP, the agent focus on choosing, at each time step, the most optimal action that also maximizes the information transmitted to an observer about the agent’s goal. To accomplish such optimal and legible behaviour, the agent reasons about the observer’s belief of the agent’s objective given the history of the observed agent actions. The observer’s belief regarding the agent’s intentions is modelled using the multiagent framework of interactive POMDPs [13]. By reasoning about the observer’s belief – using this reasoning to drive the planning algorithm – the agent can derive a legible policy that best disambiguates the agent’s goal [43]. <|MaskedSetence|> <|MaskedSetence|> However, the nature of L-MDP makes its planning complexity similar to that of POMDP, limiting its applications to small scale state spaces as the planning can become intractable in large scale state spaces.
.
|
**A**: [12] present a formulation of legibility for MDPs, named legible Markov decision problem (L-MDP).
**B**: The results of a user study, conducted by the authors, showed that the resulting legible policies are capable of better transmitting the agent’s intentions than using standard optimal policies.
**C**: The legible policy is obtained by iteratively updating the assumed observer’s belief and with the updated belief simulate the possible actions – using a method like UCT [44] – to find the one that best disambiguates the agent’s goal.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> We demonstrated that replacing GFL converters with GFM converters is equivalent to enhancing the power grid strength, characterized by the so-called generalized short-circuit ratio (gSCR). However, the approach [9] can only be used to determine the optimal locations to replace GFL converters with GFM converters, but it still remains unclear how to configure newly installed GFM converters in the grid and more importantly, how to decide their capacities (or equivalently, how many GFM converters we will need) to ensure the system’s small signal stability. Furthermore, the analysis in [9] only considers one type of GFM control (i.e., VSM) and directly approximates a VSM as an ideal voltage source (without deriving the equivalent impedance as will be done in this paper). <|MaskedSetence|> In this case, though the terminal voltage magnitude is well regulated, it remains unclear if the GFL converters can be considered as effective voltage sources to enhance the power grid (voltage) strength. We believe that it is essential to answer the above question before studying how many GFM converters we will need to enhance the power grid strength, as one may simply resort to modifying GFL converters to enable voltage source behaviors if they can be used to enhance the power grid strength.
.
|
**A**: Intuitively, since GFM converters behave like voltage sources, installing a GFM converter near a GFL converter should improve the local power grid strength of the GFL converter and thus improve its small signal stability margin (as GFL converters may become unstable in weak grids).
**B**: This intuition was confirmed in our previous work [9], where we investigated the impact of GFM converters on the small signal stability of power systems integrated with GFL converters.
**C**: Such an approach might not apply to other GFM methods once they have weaker voltage source behaviors than VSMs in [9], as it remains unclear how to quantify the voltage source behaviors of different GFM methods and analyze their interaction with GFL converters.
Moreover, one important question is: since GFL converters can perform constant AC voltage magnitude control, do they also have effective voltage source behaviors to enhance the power grid strength? To be specific, one can introduce the terminal voltage magnitude as a feedback signal to generate the reactive current reference and regulate the voltage magnitude to a reference value [3, 4].
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> On the other hand, the inter-class distance is still increasing even at a later stage. The Jacobian norm difference rises more gradually, and the rate of increase becomes large at the last stage. <|MaskedSetence|> These observations show that the known and unknown class representations are separated as the model makes their Jacobian norm different. Still, the Jacobian norm is not the only factor contributing to their separation.
Figure 6:
We measure the detection performance (in AUC), the discriminative quality of known classes (in DBI), and the averaged Jacobian norm difference for a single model during different training iterations, indicating a strong correlation between these metrics.
|
**A**: The separation between the known and unknown also increases largely at the early stage but continues to improve even later in training.
**B**: Specifically, the intra/inter ratio is stable at the early stage of training.
**C**: Although the global trend has a simple correspondence between these metrics, a more careful look at the graphs of Fig. 5 shows that the metrics involve different phases during training.
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 3
|
<|MaskedSetence|> The other six models were used as black-box targets for evaluating the performance of these examples. <|MaskedSetence|> The rightmost bars show that ANDA and MultiANDA generated 840/1000 and 863/1000 adversaries, respectively, that fooled all six targeted models, versus 517/1000 for VMI-FGSM. Naturally, the remaining sets of bars for this baseline method are higher, meaning that it deceived fewer targeted models.
Moreover, we visualized the generated examples and the perturbations by these three methods (see Figure 1). The results show that the perturbations crafted by ANDA and MultiANDA focus more on the semantic areas of objects than VMI-FGSM, which dominates the decision of the prediction model. <|MaskedSetence|>
|
**A**: We further investigated the adversarial examples generated with ResNet-50, shown in Figure 3.
**B**: More visualization results are provided in the Appendix.
Figure 3: Most examples crafted by our proposed methods successfully deceived all 6 black-box models.
**C**: A higher number of successfully deceived target models implies more capable adversaries and therefore stronger attacks.
|
ACB
|
ACB
|
ABC
|
ACB
|
Selection 4
|
Having established the power of D-VAE, we finally illustrate the importance of the extended Kalman filter (EKF). Using our proposed EKF for the real-time estimation of the flying objects (see Section III-C for details), we realize a precise position estimation of the objects.
With new target locations, the target node in the graph is also updated for better interception. We observed that without the EKF, the trajectory tracking deviation gradually accumulated, which finally led to a huge deviation and possibly caused the failure of intercepting the flying target objects. On the contrary, the EKF can correct the tracking trajectory by executing real-time estimation of the target objects, which successfully tracks the oracle moving trajectory with the smallest tracking errors.
To further confirm our observations, we run comprehensive experiments and also compare our EKF with other linear and polynomial estimation algorithms such as linear/polynomial regression, and nonlinear estimation algorithms such as the B-spline algorithm. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Therefore, by integrating the EKF, our method achieved the highest success ratio of 98.6%, which significantly outperformed the other algorithms (OLSReg: 15.6%; PolyReg: 52.3%; B-Spline: 71.8%). The above ablation study confirmed that real-time nonlinear and recursive estimation of the target moving object is necessary for improving the final success ratio.
.
|
**A**: First, we observed that using static estimation algorithms causes large estimation errors as they only use current position information to predict.
**B**: In detail, our EKF achieved a cumulative estimation error of 0.67, which outperformed the other algorithms: OLSReg (Ordinary Least Squares Linear Regression): 6.37; PolyReg (Polynomial Regression): 4.25; B-Spline: 2.81.
**C**: On the contrary, the EKF has nonlinear and recursive mechanisms, which utilize both current and past position information for the prediction of future object locations.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
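The EKF recursion discussed in the block above follows the standard predict/update pattern. Here is a minimal sketch assuming a constant-velocity motion model and noisy 2D position measurements; the matrices, noise covariances, and time step are illustrative assumptions, not the paper's values.

```python
import numpy as np

dt = 0.05
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # motion-model Jacobian
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # measurement Jacobian
Q = 1e-3 * np.eye(4)                        # process noise
R = 1e-2 * np.eye(2)                        # measurement noise

def ekf_step(x, P, z):
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new position measurement z.
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)                # state [x, y, vx, vy]
for z in [np.array([0.10, 0.20]), np.array([0.15, 0.31])]:  # streaming fixes
    x, P = ekf_step(x, P, z)
print(x)  # current position/velocity estimate of the flying target
```

Because the recursion combines the prediction with each new measurement, it uses both current and past information, which is the nonlinear, recursive behavior the ablation study credits.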
The iterative form of Eq. (S15) and the expression Eq. (S18) show that we can construct all required φ(ã_k, s) iteratively by “sweeping” from left to right (obtaining ℰ_k^left for k = 1, 2, 3, …, L) and then from right to left (obtaining M_k^right for k = L, L−1, …, 1). These sweeps can be implemented efficiently with the use of a scan function, just as with v_ψ(S) [c.f.
|
**A**: (S5)].
**B**: Eq.
**C**: However, while for v_ψ(S) just one sweep to the right was required (i.e.
|
BAC
|
BAC
|
BAC
|
ABC
|
Selection 1
|
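The two sweeps described in the block above map naturally onto cumulative scans. The sketch below uses plain square-matrix products as stand-ins for the actual contractions (so shapes and operands are assumptions); it only illustrates how one left-to-right scan and one right-to-left scan yield all left environments and right partial products.

```python
from itertools import accumulate

import numpy as np

rng = np.random.default_rng(0)
L = 5
tensors = [rng.normal(size=(4, 4)) for _ in range(L)]  # stand-in site tensors

# Left-to-right scan: E_k = M_1 M_2 ... M_k for k = 1, ..., L.
left_envs = list(accumulate(tensors, lambda E, M: E @ M))

# Right-to-left scan: R_k = M_k M_{k+1} ... M_L for k = L, ..., 1.
right_envs = list(accumulate(reversed(tensors), lambda R, M: M @ R))
right_envs.reverse()

# A site-k quantity combines the environment to its left with the partial
# product to its right (here k = 3, 1-indexed): E_2 and (M_3 M_4 M_5).
k = 3
combined = left_envs[k - 2] @ right_envs[k - 1]
```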
<|MaskedSetence|> As reported in [3], the experience with LiquidFeedback reveals that the behavior of agents changes over time due to their observations of other agents’ actions, which is essential for the development of decision-making processes. <|MaskedSetence|> This leads us to ask whether this process can reach an equilibrium state. In the second part of this work, we therefore introduce a game-theoretic model where agents serve as players and their delegation preferences represent their strategies. <|MaskedSetence|> We demonstrate the existence of Nash equilibria in this game under the condition of non-zero penalties imposed on extended delegation chains. Furthermore, we present the main result of this work, where we establish the possibility of defining a voting power measure such that the fundamental properties of classic delegation are upheld within an acceptable margin of error, while simultaneously attaining pure Nash equilibria for the examined game.
To the best of our knowledge, this study offers the first formal proof establishing the feasibility of a liquid democracy system that guarantees the existence of equilibrium states.
|
**A**: Each agent i𝑖iitalic_i is endowed with a utility function that measures the amount of their own vote attributed to a specific player j𝑗jitalic_j, weighted by a satisfaction factor which accounts for i𝑖iitalic_i’s contentment with the proportion of their voting power ceded to j𝑗jitalic_j.
**B**: The temporal variable is a crucial factor to consider in our analysis.
**C**: Therefore, we assume that an agent’s delegation preference can be represented as a function that evolves over time and depends on real-time feedback received by the agent.
|
BCA
|
ABC
|
BCA
|
BCA
|
Selection 3
|
Study on block structures. Within MAB, we choose the emerging metaformer style rather than the RCAN-style structure to deploy MLKA. <|MaskedSetence|> The experimental results indicate that the transformer-style MAB surpasses the RCAN-style one by a large margin. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: On Set5 [2], the PSNR is increased from 32.15 dB to 32.33 dB by employing the transformer structure.
**B**: To fully explore their effectiveness, we implement and compare two versions of MABs in Tab. 3.
**C**: The results show the transformer-style MAB is more efficient in balancing performance and computation.
|
BAC
|
BAC
|
ACB
|
BAC
|
Selection 2
|
<|MaskedSetence|> (2018; 2019) introduce the gradient flow dynamics framework to analyze the dynamics of multi-layer linear neural networks under the ℓ2subscriptℓ2\ell_{2}roman_ℓ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-loss and find deeper neural networks biasing towards low-rank solution during optimization. <|MaskedSetence|> <|MaskedSetence|> Differently, we focus on federated learning with the cross-entropy loss. More importantly, our analysis focuses on dimensional collapse caused by data heterogeneity in federated learning instead of depth of neural networks.
Feature Decorrelation.
|
**A**: (2021) finds two factors that cause dimensional collapse in self-supervised learning, namely strong data augmentation and implicit regularization from depth.
**B**: Gradient Flow Dynamics.
Arora et al.
**C**: Following their works, Jing et al.
|
BCA
|
ABC
|
BCA
|
BCA
|
Selection 1
|
<|MaskedSetence|> This method ensures that the test split maintains a balanced representation of samples for each class, preserving the proportionality of class distributions in each train-test split. Additionally, 10% of the training partition of each fold was reserved for validation and hyperparameter tuning. <|MaskedSetence|> Subsequently, we repeated the training of each model on all 5 folds based on the best-performing hyperparameters of the initial random fold. <|MaskedSetence|>
|
**A**: We initially selected random hyperparameter values for the training of each model on a random fold (out of 5 folds).
**B**:
In the model training process, we adopted a Stratified 5-Fold cross-validation strategy.
**C**: Finally, the trained models (on all 5 folds) were tested on the corresponding test split.
|
BAC
|
CBA
|
BAC
|
BAC
|
Selection 4
|
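The protocol in the block above (stratified 5-fold cross-validation, with 10% of each training partition held out for validation and tuning) maps directly onto scikit-learn. A minimal sketch follows; the dataset and classifier are placeholders, not the models evaluated in the source.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, train_test_split

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5,
                           random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    X_tr, X_te = X[train_idx], X[test_idx]
    y_tr, y_te = y[train_idx], y[test_idx]
    # Reserve 10% of the training partition for validation (stratified again).
    X_fit, X_val, y_fit, y_val = train_test_split(
        X_tr, y_tr, test_size=0.10, stratify=y_tr, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_fit, y_fit)
    print(f"fold {fold}: val={model.score(X_val, y_val):.3f} "
          f"test={model.score(X_te, y_te):.3f}")
```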
2.3 Logic Rule Induction Methods
Heuristic techniques are computed from the characteristics of the graph structure; the key is to compute a similarity score over the neighborhoods of the two target nodes nowell2003link ; lu2011link . <|MaskedSetence|> Zhang et al. zhang2017weisfeiler proposed the Weisfeiler-Lehman Neural Machine (WLNM) to extract a subgraph around each target link as an adjacency matrix, and train a fully connected neural network on these adjacency matrices to learn a link prediction model. <|MaskedSetence|> However, an increase in the number of neighbor hops means higher computational costs and memory consumption. <|MaskedSetence|>
|
**A**: Studies have shown that higher-order heuristics such as rooted PageRank brin2012reprint and Simrank jeh2002simrank have better performance than lower-order heuristics adamic2003friends ; zhou2009predicting .
**B**: Moreover, heuristics have poor applicability across different types of networks lu2011link ; klein1993resistance , which means complex computation is required to find the appropriate heuristic for different networks.
**C**: As shown in Figure 1, according to the maximum number of neighbor hops used in the calculation process, the heuristic methods can be divided into three groups, including first-order, second-order and higher-order heuristics.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 4
|
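To make the first-order/second-order/higher-order grouping in the block above concrete, here is a small NetworkX sketch that scores one candidate link with common neighbors (first-order), Adamic-Adar (second-order), and a rooted-PageRank-style score (higher-order). The graph and node pair are arbitrary examples.

```python
import networkx as nx

G = nx.karate_club_graph()
u, v = 0, 33   # candidate link

common = len(list(nx.common_neighbors(G, u, v)))          # first-order
adamic_adar = next(nx.adamic_adar_index(G, [(u, v)]))[2]  # second-order

# Rooted PageRank: a random walk with restart at u; the stationary mass
# at v serves as a higher-order similarity between u and v.
ppr = nx.pagerank(G, personalization={n: float(n == u) for n in G})
rooted_pr = ppr[v]

print(common, adamic_adar, rooted_pr)
```

The higher-order score touches the whole graph, which is exactly why it costs more computation and memory than the low-order counts.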
<|MaskedSetence|> When drawing their solutions as intervals (top) became awkward, they eventually resorted to triangles (bottom). The purple triangle is the optimal wrapping when the last straight section of the ruler can be as short as or shorter than the previous one, and the yellow triangle is the optimal wrapping when it must be longer.
We consider rolling rulers into rectangles instead of triangles because if the last straight section of the ruler must be longer than the third to last — analogously to Gagie et al.’s assumption — then Ruler Rolling is equivalent to partitioning a string of positive integers into substrings such that the sums of the even substrings are increasing, and the sums of the odd substrings are increasing. <|MaskedSetence|> <|MaskedSetence|> The running time drops back to quadratic, however, if we have a scalar objective function that can be computed in constant time and respects Pareto optimality in the sense that it assigns the same score to rollings with the same dimensions and assigns a better score to one rolling than to another if the former is shorter and not wider or thinner and not taller than the latter. Intuitively, this is because the objective function projects all the solutions onto a line, and then we can find the minimum in time linear in the number of solutions and quadratic in the number of segments in the ruler.
|
**A**: (We do not know of quite such a nice equivalence in the case of triangular rollings.) We give a simple online dynamic-programming algorithm that reports all the Pareto-optimal rollings in quadratic time under this assumption.
**B**: Our algorithm still works without the assumption, but then it is not online and we are left with a quadratic number of feasible two-dimensional solutions, so finding the Pareto-optimal ones and discarding the others increases our running time by a logarithmic factor.
**C**:
Figure 1: Gagie et al.’s [3] Figures 3 and 4.
|
CAB
|
CAB
|
CAB
|
BAC
|
Selection 3
|
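A brute-force sketch of the partitioning equivalence stated in the block above: a partition of the segment lengths is a candidate rolling when the sums of its even-indexed substrings are increasing and the sums of its odd-indexed substrings are increasing. The strictness of "increasing" and the reading of even/odd sums as the rectangle's two dimensions are assumptions here; the enumeration merely stands in for the quadratic dynamic program.

```python
from itertools import combinations

def partitions(segs):
    """All ways to split the segment lengths into contiguous substrings."""
    n = len(segs)
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = (0, *cuts, n)
            yield [segs[a:b] for a, b in zip(bounds, bounds[1:])]

def is_valid_rolling(parts):
    """Even-indexed substring sums increasing, odd-indexed sums increasing
    (strictness assumed)."""
    sums = [sum(p) for p in parts]
    even, odd = sums[0::2], sums[1::2]
    return (all(a < b for a, b in zip(even, even[1:])) and
            all(a < b for a, b in zip(odd, odd[1:])))

ruler = [1, 2, 1, 3, 2]   # toy segment lengths
valid = [p for p in partitions(ruler) if is_valid_rolling(p)]
# Read off the two dimensions of each feasible rolling.
dims = sorted({(max(sum(q) for q in p[0::2]),
                max((sum(q) for q in p[1::2]), default=0)) for p in valid})
print(dims)   # Pareto-optimal rollings are the non-dominated pairs here
```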
<|MaskedSetence|> 4.1] with a more precise characterization of the ±1 error. <|MaskedSetence|> Moreover, Lemma 2 presents a special case of Lemma 1, i.e., when errors do occur. <|MaskedSetence|> The proofs of Lemma 1 and Lemma 2 are presented in App. C.
|
**A**:
Lemma 1 proves the corresponding statement proposed in [26, Sect.
**B**: We show the possible truncation error is +1 or −1 for a positive or a negative input respectively, while [26, Theorem 1] simply states the existence of potential ±1 error.
**C**: As we will show later, our sign determination protocol benefits from a better understanding of how and when the ±1 error occurs.
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 2
|
<|MaskedSetence|> (2021). Results are shown in Table 3. Both the student and teacher models exceed the reported results for a T5-base model that was fine-tuned on 11,290 in-domain examples of the decontextualization task. <|MaskedSetence|> <|MaskedSetence|>
|
**A**:
4.4 Decontextualizing markup
To measure the accuracy of the decontextualizing markup, we apply the prompt-based teacher and the fine-tuned student models to a manually decontextualized dataset, in which references are replaced inline rather than annotated with markup Choi et al.
**B**: Our models produce a different style of decontextualization from the test data, so it is possible that these results could be further improved..
**C**: This shows that it is possible to learn to perform the task reasonably well from just five labeled examples, and that distillation improves performance further.
|
ACB
|
BCA
|
ACB
|
ACB
|
Selection 4
|
The manuscript is structured around the building blocks needed to achieve this result. <|MaskedSetence|> <|MaskedSetence|> In Section IV, we prove a coding theorem for this information-theoretic problem. <|MaskedSetence|> Finally, these techniques will be combined to obtain a threshold-type coding theorem for fault-tolerant entanglement-assisted capacity in Section VI.
.
|
**A**: One important facet of communication with entanglement-assistance in our scenario comes in the form of noise affecting the entangled resource states, for which we introduce a scheme of fault-tolerant entanglement distillation in Section V.
**B**: In Section II, we briefly review concepts from fault-tolerance of quantum circuits used for communication.
**C**: In Section III, we outline how the fault-tolerant communication setup can be reduced to an information-theoretic problem which generalizes the usual, faultless entanglement-assisted capacity.
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> This is because MonoAug trains the surrogate language model using the entire training set, leading to a high degree of similarity between the original and augmentation instances. <|MaskedSetence|> Additionally, the variant without the perplexity filtering strategy performs the worst, indicating that the perplexity filtering strategy is crucial in removing instances with syntactical and grammatical errors. The performance of the variants without the predicted label constraint and confidence ranking is similar, with the label constraint helping to prevent the mutation of features into an adverse meaning and the confidence ranking helping to eliminate out-of-domain words and reduce feature space shift.
.
|
**A**: The results, which can be found in Table 4, show that the performance of the variant MonoAug is significantly lower than that of BoostAug.
**B**:
To gain a deeper understanding of the working mechanism of BoostAug, we conduct experiments to evaluate the effectiveness of cross-boosting, predicted label constraint, confidence ranking, and perplexity filtering.
**C**: This data overlapping problem, as discussed in Section 2.1, results in biased instance filtering and overfitting of the instances to the training fold data distribution.
|
BAC
|
BAC
|
CAB
|
BAC
|
Selection 4
|
6.3 Observations
Results are shown in Figure 3. <|MaskedSetence|> <|MaskedSetence|> Plots for other datasets are shown in (b), (c), (d) and (e); these have been miniaturized to fit the page. <|MaskedSetence|> The p-value of the Friedman test is reported, p = 3.5025×10⁻⁸. Here too, we do not consider the worst-performing candidate, FCNN1, so as to not bias the Friedman test in our favor.
|
**A**: Figure 3(f) shows the mean rank (lower is better) across datasets and number of prototypes (as described in Section 4, trials are aggregated over).
**B**: (a) shows the plot for the adult dataset.
**C**: The number of prototypes are shown on the x-axis as percentages of the training data.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
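The mean-rank comparison and Friedman test reported in the block above can be computed with SciPy as follows. The score matrix is synthetic, and ranks are defined so that lower is better, matching the text.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
scores = rng.uniform(0.6, 0.9, size=(12, 4))   # 12 settings x 4 methods
scores[:, 0] += 0.05                           # make one method better

# Friedman test: are the methods' mean ranks significantly different?
stat, p = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])

# Mean rank per method across settings (rank 1 = best score).
mean_ranks = np.mean([len(row) - np.argsort(np.argsort(row))
                      for row in scores], axis=0)
print(f"p={p:.2e}", mean_ranks)
```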
<|MaskedSetence|> <|MaskedSetence|> NEWS-COPY contains two types of data: data for training and four full day exhaustively labeled evaluation samples, constructed with two consecutive days of content in 1930 and single days in 1955 and 1974, selected at random. The 1955 sample is a validation set used to select hyperparemters for both the N𝑁Nitalic_N-gram and neural methods. 1930 and 1974 are pooled to form the test set and used only to produce the results shown in this paper. <|MaskedSetence|>
|
**A**:
3.2 Description of the NEWS-COPY Dataset
Table 1 summarizes the key features of the NEWS-COPY dataset.
**B**: It consists of 27,210 articles, drawn from 973 newspapers between 1920 and 1977. (A copyright law change effective January 1, 1978 resulted in nearly all newspapers from that date forward being under copyright by default.)
**C**: In the full day samples, there are far more negative than positive pairs, as is generally the case in de-duplication problems, whereas the training data contain a more balanced sample.
|
ABC
|
BAC
|
ABC
|
ABC
|
Selection 4
|
We first demonstrate that iFAST obtains comparable performance on service quality comparing to SpotDataD, while outperforming other benchmarks. Then, we show that iFAST achieves such performance under a significantly lower time overhead.
In Fig. <|MaskedSetence|> Nevertheless, SpotDataD is prone to significant long-term latency due to its reliance on onsite analysis/communications. Moreover, iFAST surpasses other benchmark methods, a testament to the efficacy of our designed overbooking and risk control strategies.
Fig. 5 compares the average running time over 300 transactions, employing a log-based representation to highlight the gaps. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This escalation is attributed to the latency required for each buyer-seller pair to conclude a trading decision, underscoring the efficiency challenges as market participant numbers rise.
|
**A**: Conversely, the running times for SpotDataD and ImproveIE exhibit a marked increase with the growth of sellers and buyers.
**B**: 4, we observe that the average service quality (per transaction) for buyers using our iFAST is marginally lower than that of SpotDataD, since SpotDataD’s primary objective is to identify the optimal solution by analyzing the current network/market conditions, whereas iFAST confronts uncertainties and associated risks.
**C**: Notably, iFAST demonstrates a significantly reduced running time, benefiting from the implementation of pre-signed forward contracts, which eliminate the need for buyers and sellers to negotiate trading agreements during each transaction.
|
BCA
|
BCA
|
ABC
|
BCA
|
Selection 1
|
Currently, there is a lack of an open platform for different EO tasks. For deep learning methods, backbone networks, hyper-parameters, and training tricks are influential factors that should be considered for fair performance comparison. However, existing works usually evaluate the performance with different dataset splits, which makes it difficult to fairly and reliably compare different algorithms. <|MaskedSetence|> As a result, many cutting-edge and off-the-shelf deep learning methods from the machine learning community are not evaluated and compared on RS data.
To tackle the previously mentioned issues, in this study, we first make an exhaustive and comprehensive review of the publicly accessible RS datasets. Next, a systematic analysis is undertaken based on the information about these datasets. <|MaskedSetence|> To further enable a fair and reproducible comparison of different algorithms, we construct a new deep learning platform, EarthNets, as a foundation for future work. <|MaskedSetence|>
|
**A**: Based on the attribute information, we filter, rank, and select five large-scale datasets designed for general purposes to build a new benchmark for model evaluation.
**B**: Due to the large variance in data collection sensors and pre-processing pipelines, it is non-trivial to adapt modern deep learning models to RS datasets [28].
**C**: Our main contributions are summarized below..
|
ABC
|
BAC
|
BAC
|
BAC
|
Selection 3
|
A. Buluç were supported by the Office of Science of the DOE under Award Number DE-AC02-05CH11231. <|MaskedSetence|> <|MaskedSetence|> DOE under Contract No. DE-AC02-05CH11231. <|MaskedSetence|> Murray was also funded by an NSF Collaborative Research Framework under NSF Grant Nos. 2004235 and 2004763. This research used resources.
|
**A**: R.
**B**: R.
**C**: Murray was supported by Laboratory Directed Research and Development (LDRD) funding from Berkeley Lab, provided by the Director, Office of Science, of the U.S.
|
CAB
|
BCA
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> In Section 3, we propose a numerical framework that leverages the multipole expansions to accelerate the simulation of crystalline defects. <|MaskedSetence|> In Section 4, we apply our main algorithm (cf. <|MaskedSetence|> Section 5 presents a summary of our work and future directions. The proofs, as well as additional analysis that can aid in understanding the main idea of this paper, are given in Section 6.
|
**A**: Algorithm 3.3) to various prototypical numerical examples of point defects.
**B**:
Outline
The paper is organized as follows: In Section 2, we provide an overview of the variational formulation for the equilibration of crystalline defects and review the result on multipole expansions (cf. [3, Theorem 3.1]) that provides a general characterization of the discrete elastic far-fields surrounding point defects.
**C**: We utilize continuous multipole expansions instead of discrete ones to obtain an efficient implementation.
|
ABC
|
BCA
|
BCA
|
BCA
|
Selection 3
|
In practice, real-time reconfigurability in the range of milliseconds might still be difficult to achieve, as it imposes stringent timing requirements on the control channel. Alternatively, beam-hopping techniques that are popular in satellite communications [34] can be considered. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Therefore, the reconfiguration needs to be done only occasionally with long cycle times, and the requirements on the control channel are significantly relaxed. To allow for initial access, all potential beam directions are sequentially illuminated and scanned (beam sweeping) during multiple synchronization signal blocks (SSB). This results in substantial initial-access latency and a long beam-hopping period. Therefore, the RIS node is designed to support a medium number of wide initial-access beams or, alternatively, a permanent directive link is dedicated between the access point and the RIS node. While the control overhead is reduced, synchronous operation (for instance via GPS) between the RIS nodes and the donor nodes is still required. A notable advantage of the redirective RIS system is the simultaneous beam hopping of multiple beams at full aperture gain, particularly when the RIS node is shared among several donor sites (e.g., Fig. 2), as explained in the next subsection.
.
|
**A**: Section IV-A).
**B**: The periodic beam hopping time plan can be determined and updated based on the varying traffic demand and the RIS scattering pattern can be optimized based on long-term statistical channel information [35] which also reduces the training overhead (c.f.
**C**: Beam-hopping consists of serving sequentially users spots in turn according to a predetermined schedule.
|
CBA
|
CBA
|
CBA
|
BAC
|
Selection 1
|
<|MaskedSetence|> Let us take a closer look at the behavior of temperature here. In the previous work Liang et al. (2018), it is suggested to take a sufficiently large value of temperature. However, from the proof of Theorem 3.1, it can be seen that temperature just needs to be greater than 1.
We choose temperatures at different scales to test the effect (of temperature) on different data sets and the results are shown in Figure 3. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: 6.4. Analysis On Temperature Scaling
In Theorem 3.1, we demonstrate that temperature scaling can help differentiate the distribution between IND and OOD.
**B**: As T becomes larger, the benefits brought by T will soon become smaller, which is in line with our expectations.
**C**: We find that after T > 1 (without a large value), the effect of OOD detection is very significant.
|
ACB
|
ACB
|
CAB
|
ACB
|
Selection 2
|
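A small sketch of the temperature-scaling mechanics discussed in the block above, using the maximum softmax probability as the OOD score. The toy logits are made up; the snippet shows only how the score behaves once T > 1 and as T grows, not the paper's measured effect.

```python
import numpy as np

def msp_score(logits, T=1.0):
    """Maximum softmax probability with temperature scaling: larger T
    flattens the softmax over the class logits."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)                          # higher = more IND-like

# A confident in-distribution sample vs. a flatter OOD-like sample.
ind = np.array([[8.0, 1.0, 0.5]])
ood = np.array([[3.0, 2.8, 2.6]])
for T in [1, 2, 10, 100]:
    # Observe how the IND/OOD score gap varies with T.
    print(T, msp_score(ind, T) - msp_score(ood, T))
```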
Characterizing the MID for Rayleigh-product channels is very challenging due to several reasons. On the one hand, compared with the MID analysis for the (single) Rayleigh channel, setting up a CLT for the MID of Rayleigh-product channels needs to handle the fluctuations induced by two independent random matrices instead of only one. <|MaskedSetence|> <|MaskedSetence|> However, as demonstrated in [33, 31], the evaluation of the characteristic function and the asymptotic variance of the MID are much involved and rely on the complex computation of the trace of the resolvent. Furthermore, the high SNR analysis is also more challenging for Rayleigh-product channels than that for the Rayleigh case since the key parameter is determined by a cubic equation instead of a quadratic equation [29, 31]. <|MaskedSetence|>
|
**A**: On the other hand, compared with the MI analysis for two-hop channels [31], the MID expression has an additional term related to the coding scheme such that the covariance between the MI and the additional term, and the variance of the additional term need to be evaluated.
**B**: A classical approach of setting up a CLT is to show the convergence of the characteristic function for the concerned statistic to that of the Gaussian distribution [34].
**C**: The difficulty has been shown in [33] when extending the CLT for the linear spectral statistics of a single random matrix [34] to that of the product of random matrices.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
<|MaskedSetence|> (2019) and Google Research Football Kurach et al. (2020). Additionally, we validate RQ.4 through experiments in the Learning to Rank (LTR) scenario Liu and others (2009). <|MaskedSetence|> <|MaskedSetence|> Furthermore, to assess algorithm performance after knowledge distillation, we utilize QMIX and QPLEX as decentralized execution baselines. All experiments are conducted using the PyMARL2 framework Hu et al. (2021) with 8 parallel runners and 3 random seeds. Details regarding hyperparameters are available in Table 7 in the Appendix.
|
**A**: To our best knowledge, this is the first time that the MARL algorithm has been applied to LTR tasks.
For baselines, we categorize them into two classes: centralized execution algorithms and decentralized execution algorithms, outlined in Table 1.
**B**:
We investigate the research questions using popular MARL testbeds, namely StarCraft II Samvelyan et al.
**C**: To showcase the impact of agent-personalized global information, we contrast our approach with two centralized execution algorithms, CSRL and COPA, and perform an ablation experiment using QMIX_GIU.
|
BAC
|
BAC
|
BAC
|
ACB
|
Selection 2
|
<|MaskedSetence|> As the XR-aided teleoperation task relies on both parallel and consecutive communication links, how to guarantee the cooperation among these communication links to execute the task is of vital importance. Specifically, the parallel visual and haptic feedback transmissions should be aligned with each other when arriving at the manipulator, and consecutive C&C and feedback transmissions should be within the motion-to-photon delay constraint, which is defined as the delay between the movement of the user’s head and the change of the VR device’s display reflecting the user’s movement. <|MaskedSetence|> Therefore, both parallel alignment and consecutive latency should be quantified into effectiveness-aware performance metrics to guarantee the success of XR-aided teleoperation. <|MaskedSetence|> Hence, how to alleviate the accumulated error remains an important challenge that needs to be solved.
.
|
**A**: Either violation of alignment in parallel links or latency constraint in consecutive links will lead to a break in presence (BIP) and cybersickness.
**B**:
III-B1 XR-aided Teleoperation
To implement a closed-loop XR-aided teleoperation system, the wireless network is required to support mixed types of data traffic, which includes control and command (C&C) transmission, haptic information feedback transmission, and rendered 360° video feedback transmission [14].
**C**: Moreover, due to the motion-to-photon delay, the control error between the expected trajectory and the actual trajectory will accumulate along with the time, which may lead to task failure.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 4
|
To address challenges associated with power flow nonlinearities, we employ a linear approximation of the power flow equations that is adaptive (i.e., tailored to a specific system and a range of load variability) and conservative (i.e., intend to over- or under-estimate a quantity of interest to avoid constraint violations). These linear approximations are called conservative linear approximations (CLAs) and were first proposed in BUASON2022 . As a sample-based approach, the CLAs are computed using the solution to a constrained regression problem across all samples within the range of power injection variability. <|MaskedSetence|> These linear approximations can also effectively incorporate the characteristics of more complex components (e.g., tap-changing transformers, smart inverters, etc.), only requiring the ability to apply a power flow solver to the system. <|MaskedSetence|> The accuracy and conservativeness of our proposed method is based on the information of the location of DERs and their power injections variability. As inputs, our method uses the net load profiles including the size of PVs when computing the CLAs. <|MaskedSetence|>
|
**A**: They linearly relate the voltage magnitudes at a particular bus to the power injections at all PQ buses.
**B**: Additionally, in the context of long-term planning, the CLAs can be readily computed with knowledge of expected DER locations and their potential power injection ranges.
**C**: In practice, this data can be obtained by leveraging the extensive existing research on load modeling and monitoring to identify the locations and capabilities of behind-the-meter devices (refer to, e.g., Grijalva2021 ; Schirmer2023 ).
An example of an overestimating CLA of the voltage magnitude at bus i is the linear expression.
|
ABC
|
CAB
|
ABC
|
ABC
|
Selection 3
|
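In the spirit of the sample-based constrained regression described in the block above, here is a hedged sketch of an overestimating conservative linear approximation computed as a linear program with SciPy. The synthetic samples stand in for power flow solutions over the injection range; the variable names and the minimize-total-slack objective are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(200, 3))   # power-injection samples (synthetic)
# "True" voltage magnitudes with a mild nonlinearity (stand-in for power flow).
v = 1.0 + P @ np.array([0.02, -0.01, 0.015]) + 0.03 * np.sin(P).sum(axis=1)

n, d = P.shape
A = np.hstack([P, np.ones((n, 1))])     # decision variables: [a, b]
# minimize sum_i (a^T p_i + b - v_i)  subject to  a^T p_i + b >= v_i  for all i
res = linprog(c=A.sum(axis=0), A_ub=-A, b_ub=-v,
              bounds=[(None, None)] * (d + 1))
a_fit, b_fit = res.x[:d], res.x[d]
assert np.all(A @ res.x >= v - 1e-8)    # conservative over every sample
```

Swapping the sign of the constraints and objective would produce an underestimating CLA instead.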
Our contributions in this article:
This work initiates the study of using ML for decoding the heavy hexagonal code. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We identify a unique representative element from each such class. This approach reduces the number of error classes, resulting in a classification problem with fewer classes, and thus the training of the ML model becomes faster and more accurate.
.
|
**A**: Being a subsystem code, the entire codespace of the heavy hexagonal code is partitioned into equivalent classes (details given in Sec. 4).
**B**: The heavy hexagonal code is a hybrid of surface code and Bacon-Shor code, where the latter is a subsystem code [16].
**C**: By this property, distinct errors can be clubbed into certain equivalent error classes, called gauge equivalence.
|
BAC
|
BAC
|
BAC
|
ACB
|
Selection 3
|
<|MaskedSetence|> (2022) adopt a stochastic optimization one. <|MaskedSetence|> (2022) assume l_t = l are fixed. Second, we assume that the dynamics (𝒟 and ρ) are known (Assumption A1), whereas Ray et al. <|MaskedSetence|>
|
**A**: So, we assume that the loss functions l_t are adversarially chosen, whereas Ray et al.
**B**: geometric decay, whereas Ray et al.
**C**: (2022) assume.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 3
|
In pursuit of the flexibility and advantages of ad-hoc arrays, this paper introduces a deep-learning-based 2D speaker localization method leveraging large-scale ad-hoc microphone arrays. <|MaskedSetence|> Specifically, it comprises a feature extraction module, a DOA estimation module, a node selection algorithm, and a triangulation and clustering method. The DOA estimation module provides speaker directions. <|MaskedSetence|> <|MaskedSetence|> Finally, the clustering algorithm clusters all rough speaker locations and takes the cluster center as the final, accurate speaker location.
.
|
**A**: The node selection algorithm selects ad-hoc nodes that yield highly reliable DOA estimates.
**B**: The framework of the proposed method is shown in Fig. 1.
**C**: The triangulation module yields a rough 2D speaker location from any two randomly selected ad-hoc nodes.
|
BAC
|
BAC
|
BAC
|
BCA
|
Selection 2
|
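The triangulation step described in the block above reduces to intersecting two bearing rays. A minimal sketch, with made-up node positions and DOAs:

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Rough 2D speaker location from the DOA estimates of two ad-hoc
    nodes: intersect the two bearing rays. Angles are world-frame DOAs
    in radians; all values here are illustrative."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Two selected nodes observing a speaker at roughly (2, 1).
est = triangulate((0, 0), np.arctan2(1, 2), (4, 0), np.arctan2(1, -2))
print(est)   # -> approximately [2. 1.]
```

With many node pairs, each pair yields one rough location; clustering those estimates and taking the cluster center then gives the final location, as the passage describes.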
4.1 Limitations of the Experiments
In our human subject experiments, we attempted to control spurious variables by running the test at the same time of the day (6PM PST) and restricting the poll to adult respondents. However, there are many dimensions that, if changed, may affect the outcome, including age, level of instruction, geographic location, income and current profession, political inclination, awareness of the minimum wage in their location, and others. <|MaskedSetence|> Unlike other aggregate analyses where GPT-3 can be used at least as a coarse proxy, intersectional analysis cannot be conducted by using an LLMs since the user has no visibility into how the LLM is trained and how attributes of individuals who contributed to the training data affect the trained model. <|MaskedSetence|> I have chosen the most common jobs in the US for these experiments, which happen to be jobs for which minimum wage considerations apply. <|MaskedSetence|> I leave this extension for future investigation as well.
|
**A**: An additional dimension to explore is the type of job.
**B**: I have not tested the effect for jobs that are widely understood as being compensated at a level far above the minimum wage such as physicians, airline pilots, nurses, college professors, etc.
**C**: I reserve their analysis for future projects.
|
CAB
|
CAB
|
BCA
|
CAB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> The success of influential models like BERT (Devlin et al., 2019), GPT (Radford et al., 2018), ViT (Dosovitskiy et al., 2020), and Whisper (Radford et al., 2022), also closely resembles the original implementation by Vaswani et al. (2017), which further supports the effectiveness of the transformer framework across different tasks and domains.
Finally, our model’s scalability is currently limited by its quadratic complexity in the context size. <|MaskedSetence|> This is a significant challenge that affects all transformer-based models and has garnered considerable attention. Recent developments to tackle this challenge include flash-attention (Dao et al., 2022), efficient transformers (Katharopoulos et al., 2020), and quantization techniques (Dettmers et al., 2022), which can address this problem, enhancing the feasibility of our approach for large-scale applications.
|
**A**: Our work builds upon well-established attention models, which have demonstrated their versatility and efficacy in various domains.
Although the core model is essentially a vanilla transformer, our architecture required careful adaptation to suit our specific requirements.
**B**: Although this limitation does not pose a problem in our particular use cases, it can impede the scaling of applications.
**C**: We designed our model to be set-to-set rather than sequence-to-sequence, handling data in a non-causal and non-autoregressive manner, and generating continuous values for regression.
|
ACB
|
BAC
|
ACB
|
ACB
|
Selection 4
|
Combining our findings in mix and mask ratios, we empirically find that i-MAE can compensate for the information loss at low ratios with the additional alleviation of more visible patches (lower mask ratio). <|MaskedSetence|> <|MaskedSetence|> (ii) the image-level distinguishing relationship between minority and majority (determined by mix ratio) is potent enough for i-MAE to encode the two images separately.
(3) ViT Backbone Architecture.
We studied whether different scales of ViT affect linear separation in the Appendix. Our results show that larger backbones are not necessary for i-MAE to disentangle features on small datasets, as the insufficient training data cannot fully utilize the capability. <|MaskedSetence|>
|
**A**: Illustrated in Fig. 1, we display a case of i-MAE’s reconstruction succeeding in separating the features of an input with α = 0.1 mix factor and 0.5 masking ratio.
**B**: Through studying the mix ratio and masking ratio, we reveal that i-MAE can learn linearly separable features under two conditions: (i) enough information about both images must be present (determined by the trade-off between mask ratio and mix ratio).
**C**: However, large ViTs are crucial to the large-scale ImageNet-1K dataset.
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 3
|
<|MaskedSetence|> The variance of the noise is the only parameter in the algorithm, and governs how similar the new image is to the original image, as reported by Ho et al. (2020). <|MaskedSetence|> <|MaskedSetence|> For data augmentation we: (2a) obtain higher classification accuracy when trained on the Boomerang-augmented dataset versus no augmentation at all; and (2b) outperform SOTA synthetic data augmentation.
Finally, we show that Boomerang can be used for perceptual image enhancement. The images generated via local sampling: (3a) have better perceptual quality than those generated with competing methods; (3b) are generated faster than other trained deep-learning methods such as the Deep Image Prior (Ulyanov et al., 2020); and (3c) can be used for any desired upsampling factor without needing to train or fine-tune the network.
In Section 2 we discuss the training framework of diffusion models, introducing the forward and reverse processes. In Section 3 we introduce our proposed local sampling method—Boomerang—and provide insights on how the amount of added noise affects the locality of the resulting samples. Finally, we describe three applications (Sections 4, 5 and 6) in which Boomerang can be used without any modification to the diffusion model pretraining.
|
**A**: We apply this technique to three applications: (1) data anonymization for privacy-preserving machine learning; (2) data augmentation; and (3) perceptual enhancement for low resolution images.
**B**: Boomerang earns its name from its principal mechanism—adding noise of a certain variance to push data away from the image manifold, and then using a diffusion model to pull the noised data back onto the manifold.
**C**: We show that the proposed local sampling technique is able to: (1a) anonymize entire datasets to varying degrees; (1b) trick facial recognition algorithms; and (1c) anonymize datasets while maintaining better classification accuracy when compared with SOTA synthetic datasets.
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 2
|
6 Conclusion
In this work we proposed equivariant modelling for zero-shot coordination. <|MaskedSetence|> We showed that EQC greatly improves on [22] by guaranteeing symmetry-equivariance of policies, and can be used as a policy-improvement operator to promote robust play for a diversity of agents. <|MaskedSetence|> <|MaskedSetence|> In addition, we empirically validated that agents perform well under our proposed architectural choices.
.
|
**A**: To this aim we presented EQC, which we validated over the complex Dec-POMDP task of Hanabi.
**B**: In this way, we also showed the extent to which symmetrizing improves overall performance in coordination.
**C**: Furthermore, we showed that EQC is “symmetry efficient”, such that it does not require access to all environment symmetries in order to perform well.
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 2
|
<|MaskedSetence|> Clearly dim(row(X) ∩ row(Y)) ≤ dim(row(X)) ≤ k. <|MaskedSetence|> Hence with probability tending to 1 our random process produces matrices X, W and Y such that (7) holds.
We now check that our random process produces a matrix Y such that (6) holds with probability tending to 1. Since Y is an n×m matrix, rk(Y) ≤ min{n, m}. Since row(Y) ⊆ row(U) + row(V), which is a space of dimension at most t+k, we see that rk(Y) ≤ t+k and so rk(Y) ≤ min{t+k, m, n}. The argument in the previous paragraph shows that with probability tending to 1 the row space of Y contains a linearly independent set of size t−δ+min{k, n−(t−δ)} = min{t+k−δ, n}. When t+k ≤ m we have δ = 0 and so rk(Y) ≥ min{t+k, n} = min{t+k, n, m}. <|MaskedSetence|> So in either case the equation (6) holds with probability tending to 1.
.
|
**A**: dim(row(X) ∩ row(Y)) ≥ min{k, n−(t−δ)} with probability tending to 1.
**B**: When t+k > m then δ = m−(t+k) and so rk(Y) ≥ min{m, n} = min{t+k, n, m}.
**C**: Since we are assuming the first t−δ rows of Y span a subspace of dimension t−δ modulo U, and since the row space of Y is spanned by n vectors, we see that dim(row(Y) ∩ U) ≤ n−(t−δ).
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 2
|
Without Data Augmentation.
Explicit dense connections may help bring more efficient usage of parameters, which makes the neural network less prone to overfitting [38]. <|MaskedSetence|> To verify this, we train the models without data augmentation to reduce the influence of regularization from data augmentation. As shown in Table XVII, ResNet164 with DIA-LSTM achieves lower testing errors than the original ResNet164 and SENet. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: .
**B**: To some extent, the implicit and dense structure of our method may have a regularization effect.
Removal of Skip Connection.
**C**: Although the dense connections in DIA-LSTM are implicit, our method still shows the ability to reduce overfitting.
|
CBA
|
BCA
|
CBA
|
CBA
|
Selection 3
|
We presented a new solution concept for sequential imperfect-information games called observable perfect equilibrium that captures the assumption that all players are playing as rationally as possible given the fact that some players have taken observable suboptimal actions. <|MaskedSetence|> We showed that every observable perfect equilibrium is a Nash equilibrium, which implies that observable perfect equilibrium is a refinement of Nash equilibrium. <|MaskedSetence|> We showed that an OPE can be computed in polynomial time in two-player zero-sum games based on repeatedly solving a linear program formulation. We also argued that computation of OPE is more efficient than computation of the related concept of one-sided quasi-perfect equilibrium, which in turn has been shown to be more efficient than computation of quasi-perfect equilibrium and extensive-form trembling-hand perfect equilibrium.
We demonstrated that observable perfect equilibrium leads to a different solution in no-limit poker than EFTHPE, QPE, and OSQPE. <|MaskedSetence|> So we expect our analysis to extend to significantly more complex settings than the example considered..
|
**A**: While we only considered a simplified game called the no-limit clairvoyance game, this game encodes several elements of the complexity of full no-limit Texas hold ’em, and in fact conclusions from this game have been incorporated into some of the strongest agents for no-limit Texas hold ’em.
**B**: We also showed that observable perfect equilibrium is always guaranteed to exist.
**C**: We believe that this is more compelling than other solution concepts that assume that one or all players make certain types of mistakes for all other actions including those that have not been observed.
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 4
|
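The excerpt above states that an OPE can be computed in two-player zero-sum games by repeatedly solving a linear program. The refinement loop itself is not reproduced here; the sketch below shows only the standard maximin LP that such procedures repeatedly solve, written with scipy. It is a generic illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(M):
    """Row player's optimal mixed strategy and value for the zero-sum
    game with payoff matrix M (row player maximises)."""
    m, n = M.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                               # maximise v <=> minimise -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])  # v - x^T M[:, j] <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                          # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: value 0 and the uniform strategy.
M = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, v = maximin_strategy(M)
print(np.round(x, 3), round(v, 3))
```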
The definition of ERF indicates that allowing more pixels to participate in predictions helps enlarge the ERF. <|MaskedSetence|> This is made possible by a multi-head self-attention (MHSA) module [49]. We introduce an MHSA module between the student's projection head and the contrastive loss. Information from different pixels is aggregated to make a more robust prediction. This module adds only a small memory and computation overhead during pre-training, and does not affect the fine-tuning phase. <|MaskedSetence|> <|MaskedSetence|> R18 w/ MHSA stands for ResNet-18 enhanced by a multi-head self-attention module.
.
|
**A**: R50 and R18 are ResNet-50 and ResNet-18 respectively.
**B**: From this perspective, we can enhance the student model by explicitly relating all pixels right before making predictions.
**C**: Equipped with the MHSA module, the ERF of ResNet-18 is slightly enlarged (shown in the third picture of Fig. 4).
Figure 4: Effective receptive field.
|
BCA
|
BCA
|
ACB
|
BCA
|
Selection 2
|
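As a rough illustration of the excerpt above, the following sketch inserts a multi-head self-attention layer between a projection head and the point where a contrastive loss would be applied, so each pixel's prediction can aggregate information from all other pixels. The 1x1-conv projection head and the sizes (dim=128, heads=4) are our illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class ProjectionWithMHSA(nn.Module):
    """Hedged sketch: projection head followed by MHSA over all pixels."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)      # projection head
        self.mhsa = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat):                                # (B, C, H, W)
        z = self.proj(feat)
        b, c, h, w = z.shape
        tokens = z.flatten(2).transpose(1, 2)               # (B, H*W, C)
        out, _ = self.mhsa(tokens, tokens, tokens)          # relate all pixels
        return out.transpose(1, 2).reshape(b, c, h, w)

feats = torch.randn(2, 128, 7, 7)
print(ProjectionWithMHSA()(feats).shape)                    # torch.Size([2, 128, 7, 7])
```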
The focus of the present work is on high-dimensional relationships in which the sizes of inputs and/or outputs are large. Examples of such relationships can be found, for instance, in experimental full-field measurement data, such as our recent work on medical imaging [Lavigne et al., 2022], or in synthetic mesh data generated from finite element simulations [Lorente et al., 2017; Pellicer-Valero et al., 2020]. Although DL techniques have generally shown great success as efficient surrogates for computationally expensive numerical methods in scientific computing, some of the popular existing machine learning approaches are still based on fully-connected deep networks, which are not suitable for high-dimensional inputs/outputs. As an alternative, Convolutional Neural Networks (CNNs) have shown promising performance in a wide variety of applications, including accelerating non-linear finite element/volume/difference simulations [Obiols-Sales et al., 2020; Rao and Liu, 2020; Deshpande et al., 2022; Zhao et al., 2023]. CNNs are designed to learn a set of fixed-size trainable local filters (convolutions), thus reducing the parameter space while being capable of capturing non-linearities. <|MaskedSetence|> Moreover, one can observe that CNN architectures have a close analogy to some iterative solution schemes known in scientific computing [Wang et al., 2020a; Brenner and Scott, 2008]. This provides them with an additional interpretation as trainable iterative computational schemes for solving sets of non-linear equations, rather than general-purpose black-box approximators.
However, there is one important limitation that prevents CNNs from being general-purpose. The problem is that they only work well with grid-like data, such as images or structured meshes, which greatly hinders their use in many real-world applications where data is structured differently. Although there are some attempts to alleviate this problem in the context of FEM data, for instance by combining finite elements with an immersed-boundary method [Brunet et al., 2019], or by embedding a precomputed coordinate mapping into the classic grid [Gao et al., 2021], the effectiveness of those methods is limited to simple irregular domains, and complex geometries remain challenging in general. <|MaskedSetence|> <|MaskedSetence|> Because of their ability to handle more general structured data, GNNs are gaining increasing importance also in surrogate modelling in scientific computing [Sanchez-Gonzalez et al., 2020; Vlassis et al., 2020; Pfaff et al., 2021; Gao et al., 2022; Seo and Min, 2023; Jiang and Chen, 2023; Krokos et al., 2024]. However, these approaches are based on relatively simple message passing schemes, which are sub-optimal for highly non-linear regression tasks. In this work, we propose a novel local aggregation technique, which we denote as the Multichannel Aggregation layer, MAg, that performs multichannel localised weighted aggregations and can be seen as a direct extension of the traditional convolution layer in CNNs. Thanks to that, we are able to directly adapt some of the mechanisms/layers developed for CNNs to create efficient graph neural network architectures.
.
|
**A**: In the context of computational mechanics, local convolutions leverage the natural local correlation of nearby nodes, which leads to more efficient neural network architectures, both in terms of training- and prediction times.
**B**: They belong to the recently emerged family of Geometric Deep Learning (GDL) methods, which focus on neural networks that can learn from non-Euclidean input such as graphs and, more generally, manifolds [Bronstein et al., 2017; Wu et al., 2021].
**C**: A definitive solution to that problem has only been brought by Graph Neural Networks (GNNs)–architectures that directly handle arbitrarily-structured inputs/outputs.
|
ACB
|
CAB
|
ACB
|
ACB
|
Selection 1
|
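To make the idea of a localised weighted aggregation on a graph concrete, here is a minimal sketch in the spirit of the excerpt above: each node combines its own features with a learned transform of the mean of its neighbours' features, generalising the fixed-stencil convolution of CNNs. This is our own simplification, not the proposed MAg layer.

```python
import torch
import torch.nn as nn

class LocalWeightedAggregation(nn.Module):
    """Mean neighbourhood aggregation plus two linear maps (sketch only)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (2, E) with rows (src, dst)
        src, dst = edge_index
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])                        # sum neighbour features
        deg = torch.zeros(x.size(0), 1)
        deg.index_add_(0, dst, torch.ones(src.size(0), 1))    # in-degrees
        agg = agg / deg.clamp(min=1)                          # mean aggregation
        return self.w_self(x) + self.w_neigh(agg)

x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
print(LocalWeightedAggregation(8, 16)(x, edge_index).shape)   # torch.Size([4, 16])
```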
6.4 Real data applications
In addition to the synthetic datasets, we also use DiSP to find community memberships in several real-world networks. Table 6 presents basic information and summary statistics of the real-world networks used in this article. <|MaskedSetence|> <|MaskedSetence|> Since it is meaningless to detect community memberships for isolated nodes, which have no connection with any other nodes, we need to remove these isolated nodes before processing the data. For the Facebook-like Social Network, the original data has 1899 nodes; after removing isolated nodes from both the row and column sides, 1302 nodes remain. The Unicode languages data originally has 614 languages, and we remove the 86 languages that are not spoken in any of the 254 countries.
|
**A**: Among these networks, Crisis in a Cloister, Highschool, and Facebook-like Social Network are directed weighted networks whose row nodes are the same as column nodes while the other networks are bipartite.
**B**: Facebook-like Social Network can be downloaded from https://toreopsahl.com/datasets/#online_social_network (accessed on 9 June 2023) while the other networks can be downloaded from http://konect.cc/networks (accessed on 9 June 2023).
**C**: For Marvel, the original data has 19428 works and we remove 6486 works that have no connection with the 6486 characters.
.
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
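The isolated-node removal described in the excerpt above can be sketched as follows, assuming the network is given as a (possibly rectangular) biadjacency matrix; the matrix values below are toy data.

```python
import numpy as np

def drop_isolated(A):
    """Iteratively delete all-zero rows and columns of a (bi)adjacency
    matrix until every remaining node has at least one connection."""
    while True:
        rows = np.asarray(A.sum(axis=1)).ravel() > 0
        cols = np.asarray(A.sum(axis=0)).ravel() > 0
        if rows.all() and cols.all():
            return A
        A = A[rows][:, cols]

A = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 2, 1]])
print(drop_isolated(A))   # [[1 0]
                          #  [2 1]]
```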
Towards this, surrogate metrics have been proposed that perform model selection using only observational data. <|MaskedSetence|> Recently, the focus has shifted towards designing surrogate metrics that approximate the true effect and compute its deviation from the estimator’s treatment effect (Nie & Wager, 2021; Saito & Yasui, 2020), and they have also been shown to be more effective than other metrics (Schuler et al., 2018; Alaa & Van Der Schaar, 2019). However, most of these evaluation studies have been performed only on a few synthetic datasets; therefore, the trends observed in such studies could be questionable. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Hence, we have a poor understanding of which surrogate criteria should be used for model selection.
Contributions.
.
**B**: Earlier proposals were based on evaluating the nuisance models associated with the estimators, and the utility of decision policy (Zhao et al., 2017) based on the heterogeneous treatment effects of the estimator.
**C**: Also, there is often a lack of fair comparison between the various metrics as some of them are excluded from the baselines when authors evaluate their proposed metrics.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> Even though the robot faces down (with respect to the top view), it cannot escape from the recessed region. <|MaskedSetence|> We show the specific wall and its corresponding location on the top view with the magenta arrow. (c) Another situation was observed where the robot crashed into a glass door due to the low height of the wooden pane around it. We show the glass door and its corresponding location on the top view with the magenta arrow..
|
**A**: Our BRAT slices indicate that the robot is able to traverse through hallways reasonably well; however, sometimes, it fails.
Figure 10:
(a) Notice the highlighted area in the top-right location of the BRAT for the robot heading of $-\pi/2$ radians.
**B**: (b) On simulating the robot from one of the highlighted states, we saw that the CNN predicts a waypoint into the wall to its right and crashes the robot.
**C**: Misunderstanding certain obstacles as traversable.
|
CAB
|
CAB
|
ACB
|
CAB
|
Selection 4
|
An early detailed study of dynamical properties of some examples can be found in [14]. <|MaskedSetence|> For $r=1$, Ward connected such automata to number-theoretical dynamical systems, and used this to provide very clean proofs of some properties related to periodic points and entropy [19] (see also [2, Chapter 9]). By contrast, for general $r$ several ‘bits’ (units of information, in our case elements [footnote 1: sometimes these are called ‘pits’ for $\mathbf{F}_p$ and ‘qits’ for $\mathbf{F}_q$, but we will not use this terminology] <|MaskedSetence|> <|MaskedSetence|> In fact, most (bio-)chemical applications use state spaces of greater complexity than merely a single bit.
In this paper we link the dynamics of such automata to that of endomorphisms of the vector group $\mathbf{G}_a^r(\overline{\mathbf{F}}_p)$, where $\overline{\mathbf{F}}_p$ is the algebraic closure of $\mathbf{F}_p$. Results on periodic points for the cellular automaton follow in a straightforward manner from our previous study of dynamics on algebraic groups [2], similarly to how Ward was able to use prior number-theoretical work for $r=1$. Our results apply to general $r\geqslant 1$ (and thus, also to higher-order additive cellular automata over $\mathbf{F}_p$) and allow us to avoid considering special cases, only requiring some algebra as in [2, §5.2]..
|
**A**: of $\mathbf{F}_p$) are stored in one state; the corresponding systems are called multiband linear cellular automata [11], and the dynamics of such automata is considered much more complex [5, p. 64]. [footnote 2: We will refrain from referring to $r$ as ‘dimension’, since that word is used in the theory of automata for the grid size dimension, which in this paper is always $1$; nevertheless, via our correspondence $r$ becomes the dimension of the associated vector group.]
**B**: The interest in such multiband automata arises from the fact that they can be used to simulate higher-order linear cellular automata, i.e. automata where transition rules depend not only on the immediately preceding configuration, but on finitely many earlier configurations ([12, Proposition 3.1], [6]).
**C**: A very general framework of group-based automata can be found in [3, Chapter 8], where invertibility and the Garden-of-Eden theorem are discussed (see also [12, Proposition 3.2]).
|
CAB
|
CAB
|
BAC
|
CAB
|
Selection 2
|
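For concreteness, the toy sketch below iterates a one-dimensional additive (linear) cellular automaton over $\mathbf{F}_p$, i.e. the $r=1$ case of the excerpt above; in the multiband case $r>1$ each cell would hold a vector in $\mathbf{F}_p^r$ and the scalar coefficients would become $r\times r$ matrices. The prime and rule coefficients are arbitrary choices for illustration.

```python
import numpy as np

p = 5               # the prime of F_p; arbitrary choice
a, b, c = 1, 2, 1   # arbitrary local rule coefficients

def step(state):
    """One step of a 1-D additive cellular automaton over F_p with
    periodic boundary: next[i] = a*s[i-1] + b*s[i] + c*s[i+1] (mod p)."""
    return (a * np.roll(state, 1) + b * state + c * np.roll(state, -1)) % p

state = np.array([0, 1, 0, 0, 3, 0])
for _ in range(4):
    state = step(state)
    print(state)
```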
Today you will be collecting data on push latch door opening. <|MaskedSetence|> For the latched door, we do as follows:
Press the “clutch” or middle finger button on the right controller to move the robot. <|MaskedSetence|> <|MaskedSetence|> can be pushed open without turning the handle again, it is a success. If the fingers snap off, or the robot enters an error state, or the robot stays in contact with the door after opening it, it is a failure. If you are using the haptics application you will feel some buzzing in your hand when you come into contact with objects.
When you feel this buzzing become MUCH stronger, it means you are exceeding the safe force limits of the robot; back away and try again..
|
**A**: If the door is unlatched, i.e.
**B**: While holding the clutch, place one finger on top of the latch and press straight down.
**C**: Your goal will be to open the door as gently and efficiently as possible.
|
CBA
|
CBA
|
CBA
|
ACB
|
Selection 2
|
<|MaskedSetence|> These efforts primarily concentrated on adapting video analytics configurations for preprocessing and model selection to strike a balance between inference accuracy and latency. <|MaskedSetence|> <|MaskedSetence|> For instance, Distream [22] focused on cross-camera workload balancing and partitioning between smart cameras and a centralized edge cluster. $A^{2}$ aimed at minimizing costs through dispatching inference requests among different data centers. CrossVision [23] and Polly [23] addressed the reduction of cross-camera content redundancy and workload balancing across edge cameras.
.
|
**A**:
To enhance video analytics performance within the device-edge-cloud architecture, several studies ([11, 12, 13, 14, 15, 16]) investigated offloading strategies.
**B**: In addressing bandwidth consumption, other studies ([17, 18, 19]) employed techniques such as frame filtering, inference difficulty classification, and resolution downsizing to reduce communication costs during video transmission while preserving recognition accuracy.
**C**: Additionally, some works ([20, 21]) delved into optimizing configurations for video analytics.
Several studies (e.g., [22], [3], [23], [24]) explored video analytics in scenarios involving multiple edge nodes or data centers.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 1
|
Contribution. Our solution enables UCDs to efficiently participate in multidevice FL alongside their upstream APs, by incorporating a customizable data selection scheme and a partition-based training regime. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Our extensive empirical analyses, including measurements of cost, imbalanced data, participation heterogeneity, and connection probability, indicate that Centaur achieves higher efficiency (i.e., high accuracy and low cost) across most scenarios. Our code is open-source to encourage further advancements in this domain.
.
|
**A**: Our experiments on a small testbed of RaspberryPis validate that Centaur is promising in decreasing overall storage requirements and training time due to employing effective data selection.
**B**: The impact is a decrease in computational and communication costs, as well as an improvement in the model’s test accuracy.
**C**: Moreover, our analysis on the mobility of UCDs accounts for the real-world spatiotemporal mobility of edge devices participating in FL.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 2
|
<|MaskedSetence|> The nodes in the graph represent the houses in the distribution grid. In this power flow simulation, all the houses are placed 40 m away from each other, and NAYY 4x150 SE lines are used to connect them. These are the most common lines used in Germany [18] and 40 m is a common distance between neighbors in a rural German distribution grid [12]. We assume that the root mean square (RMS) voltage at the transformer is constant for this simulation model.
To evaluate the impact of the measurement error of the distributed smart plugs on the monitoring of the grid area, we implement several scenarios with different numbers of smart plugs installed in the grid area. <|MaskedSetence|> For each node in the original bus system, we add and connect another node representing the house and the smart plug inside the house. We also add random resistive loads between 0 kW and 6.5 kW with a power factor of 1.0 to the house nodes, representing household appliances and electric vehicle chargers. <|MaskedSetence|> The simulated voltage levels are the ground truth against which we will later compare our monitoring results..
|
**A**: The DSO sees the total load at the feeder, but does not know where the individual loads are located.
**B**:
The transformer T is connected to the 20 kV grid on the primary side and to the 400 V distribution grid on the secondary side.
**C**: We also compare the monitoring results that are based on the smart plugs with the unmodified firmware with the results based on the smart plugs with the modified firmware version.
All scenarios are based on the IEEE bus system 37.
|
BCA
|
BCA
|
BAC
|
BCA
|
Selection 4
|
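A feeder like the one in the excerpt above can be sketched with pandapower, which ships the NAYY 4x150 SE standard line type. The snippet below is a hedged approximation, with an arbitrary number of houses and a constant-voltage external grid standing in for the transformer's constant RMS voltage; it is not the paper's model.

```python
import numpy as np
import pandapower as pp

rng = np.random.default_rng(1)

net = pp.create_empty_network()
slack = pp.create_bus(net, vn_kv=0.4)    # LV side of the transformer
pp.create_ext_grid(net, slack)           # constant RMS voltage assumption

prev = slack
for _ in range(10):                      # 10 houses, 40 m apart
    bus = pp.create_bus(net, vn_kv=0.4)
    pp.create_line(net, prev, bus, length_km=0.04, std_type="NAYY 4x150 SE")
    pp.create_load(net, bus, p_mw=rng.uniform(0, 0.0065))  # 0..6.5 kW, pf = 1.0
    prev = bus

pp.runpp(net)
print(net.res_bus.vm_pu)                 # ground-truth voltage levels
```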
It will be convenient to regard our algorithms as having two separate stages, although the two stages share some ingredients. In the first stage, the algorithm iteratively subsamples and contracts the graph to estimate the strengths of all edges within a constant factor. <|MaskedSetence|> We address these two stages in the next two subsections respectively.
We note that these algorithms are very similar to those presented by [RSW18]; the key difference is that whenever the sparsification algorithm in [RSW18] subsamples the graph, it does so independently for each edge. <|MaskedSetence|> <|MaskedSetence|> We modify their subsampling procedures to obtain algorithms that do not sample completely independently, but still have the concentration properties that we need.
.
|
**A**: With weighted graphs, we would like to sample edges proportionately to their weight, and this cannot be done independently without knowledge of the graph.
**B**: This works because [RSW18] focuses on unweighted graphs.
**C**: In the second stage, the algorithm uses these edge strength estimates to construct the sparsifier, following the ideas of [BK15].
|
CBA
|
CBA
|
ABC
|
CBA
|
Selection 1
|
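The weight-proportional edge sampling discussed above can be illustrated with a naive sampler. Note that even this simple version needs the total edge weight, i.e. global knowledge of the graph, which is exactly the obstacle to fully independent per-edge subsampling; this is a sketch of the primitive, not of the paper's algorithm.

```python
import numpy as np

def sample_edges_by_weight(edges, weights, k, rng):
    """Draw k edges with probability proportional to their weights."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                      # requires knowing the total weight
    idx = rng.choice(len(edges), size=k, replace=True, p=p)
    return [edges[i] for i in idx]

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 0), (0, 3)]
weights = [10.0, 1.0, 1.0, 1.0]
print(sample_edges_by_weight(edges, weights, 5, rng))
```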
<|MaskedSetence|> <|MaskedSetence|> Addressing this challenge requires novel AV platoon-control strategies that consider human driver behavior, rather than relying solely on drivers to adapt to AVs. <|MaskedSetence|> However, a major limitation of these models is their inability to adapt to practical model-based control systems. Thus, interpretable and implementable HV–AV interaction models that are suitable for model-based control systems are essential. These models should aim to enhance safety and achieve other desired outcomes in mixed-traffic environments.
.
|
**A**: This suggests that drivers are less familiar with the dynamic behavior of AV platoons, which results in a higher risk of accidents.
**B**: Previous research on HV–AV interactions in car-following scenarios has proposed several HV models [6].
**C**:
A recent study focusing on traffic accidents involving AVs showed that 64.2% of accidents in mixed-traffic scenarios involved HVs rear-ending AVs, a significant increase compared with 28.3% in conventional HV-only traffic [5].
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 1
|
Thus far, we have shown ST$k$M’s ability to cluster moving objects of the simplest kind: points traveling in two dimensions. However, moving objects can be much more complex; they can be any evolving, high-dimensional feature vectors. <|MaskedSetence|> <|MaskedSetence|> Extracting ROIs in videos is much more challenging. <|MaskedSetence|> However, the aggregation is done over consecutive or short time windows, thereby failing to capture a global perspective [21]. This is where we believe STG$k$M could be of value..
|
**A**: Over the years, approaches have become more sophisticated, experimenting with pre- and post-processing, ensembling, and the integration of clustering objectives into the functions being optimized by neural networks [13, 17, 19].
**B**: Since ST$k$M sets the benchmark in the two-dimensional case, we seek to apply it to more interesting machine learning applications, such as region of interest (ROI) detection and tracking in videos.
Variants of $k$-means have successfully been applied in the context of image segmentation in literature dating all the way back to the 1980s [9, 23].
**C**: Most methods use deep learning to extract ROIs on a frame-by-frame basis and aggregate them over time, as in [26].
|
BAC
|
BAC
|
BAC
|
ACB
|
Selection 2
|
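For contrast with the frame-by-frame aggregation criticised in the excerpt above, here is what that baseline looks like: $k$-means run independently on each frame's pixel features, leaving cluster labels unmatched across time and capturing no global temporal perspective. Toy data; this is not the STG$k$M method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
video = rng.random((5, 32, 32, 3))               # (T, H, W, C) toy video

labels_per_frame = []
for frame in video:
    feats = frame.reshape(-1, 3)                 # one feature vector per pixel
    km = KMeans(n_clusters=2, n_init=10).fit(feats)
    labels_per_frame.append(km.labels_.reshape(32, 32))

print(len(labels_per_frame), labels_per_frame[0].shape)   # 5 (32, 32)
```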
In this study, we focus on health misinformation Wei Peng and Meng (2023); Swire-Thompson and Lazer (2019). Before the COVID-19 pandemic, health misinformation had already attracted the attention of researchers due to the surge of childhood vaccination misinformation on social media Wang et al. (2019). However, the burgeoning health misinformation during the COVID-19 pandemic Kouzy et al. <|MaskedSetence|> (2020) and its associated negative impacts (e.g., vaccination hesitancy Loomba et al. <|MaskedSetence|> (2020) has brought heightened attention from the research community to this pressing issue.
With the vast daily news output, it is unrealistic for human fact-checkers to verify every detail. As a result, many have turned to machine learning models to aid the process of claim verification and misinformation detection in its diverse forms Rani et al. (2022); Yuliani et al. (2019); Della Vedova et al. (2018); Burfoot and Baldwin (2009); De Sarkar et al. (2018); Martino et al. (2020); Khanday et al. <|MaskedSetence|>
|
**A**: (2020); Cuan-Baltazar et al.
**B**: (2021)..
**C**: (2021); Roozenbeek et al.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
7 Conclusions
Over the last few decades, many important and impressive datasets have been designed for fake audio detection. <|MaskedSetence|> The fake utterances are mostly generated by altering the timbre, prosody, linguistic content or channel noise of the original audio, which do not cover scene-fake audio, where the original acoustic scene of an utterance is manipulated with a different scene. <|MaskedSetence|> This paper presents the design policy, collection of real utterances, manipulation of fake audio and evaluation metrics of the SceneFake dataset. The fake audio is generated by using several speech enhancement models. We provide researchers with the labels of the speech enhancement methods used in the fake utterances. <|MaskedSetence|> The results show that it is more challenging to detect audio manipulated with unknown scenes than with known ones. We strongly believe that the publicly available SceneFake dataset and benchmark results will not only facilitate reproducible research but also further accelerate and foster research on fake audio detection and audio forensics. Our work is preliminary, and some limitations still exist, as mentioned in Section 6. Future work includes addressing the aforementioned limitations, as well as further studying the relationship between speech enhancement performance and fake audio detection results..
|
**A**: We also report the baseline results of scene fake audio detection on our dataset.
**B**: We design the first dataset named SceneFake that considers a counterfeit method where a scene of the audio is forged by another scene using speech enhancement technologies.
**C**: However, as far as we are aware, the existing literature in the area of fake audio detection mainly considers the following fake types: impersonation, speech synthesis, voice conversion, replay and audio manipulation.
|
CBA
|
CBA
|
BAC
|
CBA
|
Selection 1
|