Dataset schema:
- text_with_holes: string (length 114 to 4.02k)
- text_candidates: string (length 58 to 3.84k)
- A: string (categorical, 6 distinct values)
- B: string (categorical, 6 distinct values)
- C: string (categorical, 6 distinct values)
- D: string (categorical, 6 distinct values)
- label: string (categorical, 4 distinct values)
Nevertheless, interpretability is rarely a focal point in these approaches, which prioritize low memory usage and runtime. Our approach is closer in spirit to Sparse Random Fourier Features [18], which uses random Fourier feature approximations for ARD kernels, tailoring the objective function and optimization constraints to induce feature selection. Still, significant differences set us apart. <|MaskedSetence|> <|MaskedSetence|> In fact, using mini-batches is an indispensable component of RFFNet. <|MaskedSetence|> Third, the relevances output by SRFF after a single run are unreliable sources of feature importances. Finally, RFFNet is implemented as an efficient library and can be promptly integrated into data analysis pipelines based on the widely adopted scikit-learn standard.
**A**: SRFF, instead, is based on alternately minimizing two objectives, a procedure that can cause indefinite cycling of the algorithm [46] and often leads to poor performance. **B**: First, RFFNet handles any differentiable loss function, while SRFF is specifically designed for squared error loss problems in regression. **C**: Second, RFFNet is based on jointly minimizing the kernel relevances and remaining parameters with efficient stochastic gradient descent methods.
BCA
ACB
BCA
BCA
Selection 4
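For illustration, a minimal sketch of the idea this passage describes — random Fourier features with learnable per-feature ARD relevances, trained jointly with the remaining parameters by mini-batch SGD. This is not RFFNet's actual API; all names and hyperparameters are illustrative, and a real feature-selection variant would add a sparsity penalty on the relevances.

```python
import torch
import torch.nn as nn

class ARDRandomFourierFeatures(nn.Module):
    def __init__(self, n_features: int, n_rff: int = 256):
        super().__init__()
        # Fixed random frequencies and phases of the RFF approximation.
        self.register_buffer("W", torch.randn(n_rff, n_features))
        self.register_buffer("b", 2 * torch.pi * torch.rand(n_rff))
        # Learnable ARD relevances: one scale per input feature.
        self.relevance = nn.Parameter(torch.ones(n_features))
        self.head = nn.Linear(n_rff, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scaling inputs by the relevances is equivalent to learning the
        # per-dimension lengthscales of an ARD kernel.
        z = torch.cos((x * self.relevance) @ self.W.T + self.b)
        z = z * (2.0 / self.W.shape[0]) ** 0.5
        return self.head(z)

# Joint minimization of relevances and head with mini-batch SGD:
model = ARDRandomFourierFeatures(n_features=10)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)  # any differentiable loss works
loss.backward(); opt.step()
```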
<|MaskedSetence|> In addition, the development subset is also divided into train and validation subsets. However, three of the collected databases (FruitVeg81, UECFood-256, and Food-101) do not contain this division. <|MaskedSetence|> Around 80% of the images comprise the development subset, with the train and validation subsets also distributed around 80% and 20% of the development subset, respectively. <|MaskedSetence|> It is important to remark that no images are duplicated across the three subsets (train, validation, and test) in any of the seven databases. Similarly to [13], Top-1 (Top-1 Acc.) and Top-5 classification accuracy (Top-5 Acc.) are used as evaluation metrics.
**A**: The remaining images correspond to the test subset (around 20%). **B**: In such cases, we employ a similar procedure as presented in [59]. **C**: IV-B Experimental Protocol For reproducibility reasons, we adopt the same experimental protocol considered in the collected databases, dividing them into development and test subsets following each corresponding subdivision.
CAB
CBA
CBA
CBA
Selection 2
<|MaskedSetence|> <|MaskedSetence|> Similar statements hold for the computation of $\gamma_k(n)$. <|MaskedSetence|> Analogous considerations can be made for the problem of computing the $\nu$ most significant bits of $\alpha_k$ (or $\gamma_k$). Here, the input size is $\log_2\nu$ and the time becomes triple exponential in the input size. Whether these problems inherently exhibit high complexity or they can be solved efficiently by exploiting a not yet uncovered deeper structure remains to be seen.
**A**: Therefore, at the state of the art, we can place these problems in the complexity class EXPSPACE $\subseteq$ 2-EXPTIME, but not in EXPTIME and, a fortiori, not in PSPACE $\subseteq$ EXPTIME. (Technically, these are traditionally defined as classes of decision problems, hence our statements strictly apply to suitable decision versions of computing the constants of interest.) **B**: Observe that the input size is $\log_2 n$, the number of bits needed to specify the problem input $n$. **C**: From the perspective of computational complexity, we remark that, for the problem of computing $\alpha_k(n)$, given input $n$ (for a fixed $k$), only algorithms that run in doubly exponential time and use exponential space are currently known, including those presented in this paper.
CBA
CBA
CBA
BAC
Selection 1
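To make the size/time bookkeeping in this passage concrete, here is a hedged arithmetic unfolding of the stated bounds (our own restatement, not taken from the paper):

```latex
% With input size \ell = \log_2 n, "doubly exponential time" and
% "exponential space" in \ell unfold as
\begin{align*}
  \text{time}  &= 2^{2^{O(\ell)}} = 2^{n^{O(1)}}, &
  \text{space} &= 2^{O(\ell)} = n^{O(1)},
\end{align*}
% and for the \nu-bit variant with input size \log_2\nu, triple
% exponential time means 2^{2^{2^{O(\log_2\nu)}}} = 2^{2^{\nu^{O(1)}}}.
```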
<|MaskedSetence|> 2021; Fu et al. 2021; Liu et al. <|MaskedSetence|> 2017) removes shadows in an end-to-end manner. DSC (Hu et al. <|MaskedSetence|> 2021) is a two-stage context-aware network.
**A**: 2019a) captures global and context information from the direction-aware spatial attention module. SP+M-Net (Le and Samaras 2019) and SP+M+I-Net (Le and Samaras 2021) remove shadow using image decomposition. CANet (Chen et al. **B**: Recently, supervised learning-based shadow removal methods (Chen et al. **C**: 2023a, b) have shown promising performance. DeshadowNet (Qu et al.
BCA
BCA
BCA
ACB
Selection 2
<|MaskedSetence|> <|MaskedSetence|> To investigate the condition about jumps, suppose that $u\in H^1(\Omega)$. To fulfill requirement (iii) of Theorem 1.2, we need to ensure that $N_{i,j}(u)=N_{i,j'}(u)$ for all $1\leq i\leq N$, $1\leq j,j'\leq q_i$. <|MaskedSetence|> However, when there are more than two sides, there is always a pair $j,j'$ such that $\sigma_{i,j}\neq\sigma_{i,j'}$. As soon as this is the case, one can easily find a smooth function $u$ such that $N_{i,j}(u)\neq N_{i,j'}(u)$, showing that eq. (4.2) cannot meet the requirements. A remedy is to define …
**A**: If $q_i=2$, i.e., if there are just two sides of $\Gamma$ around $\boldsymbol{x}_i$ (as for example when $\Gamma$ is a manifold with boundary), this can be arranged by choosing $\sigma_{i,1}=\sigma_{i,2}$ in eq. (4.2). **B**: It guarantees the conditions $\Pi_h u_h = u_h$ and $(\Pi_h u)_{|\Gamma}=0$ when $u_{|\Gamma}=0$. **C**: The formula (4.2) comes close to satisfying the requirements.
CBA
CBA
CBA
BCA
Selection 3
We investigate the criticality of credibility adjustments to successful CCP iterations as well as the effectiveness of subsampling via Algorithm 3. <|MaskedSetence|> <|MaskedSetence|> We often find quick and catastrophic degradation to maximum entropy pseudo-labels when using softmax. When considering the average strength (maximum value) of pseudo-labels, it is clear that credibility strongly differentiates correct and incorrect pseudo-labels, making it a better fit as a measure of confidence. Recall in Fig. 3 that, when faced with uncertainty, an Xent gradient with a softmax label pushes the network to produce a high entropy pseudo-label. <|MaskedSetence|>
**A**: We also omit scaling, clipping, and subsampling when using softmax such that it resembles the SEAL algorithm. **B**: That mirrors what we see here. **C**: We focus on the base case and few-label experiments; however, repeating these experiments with the other data variables provided a similar result. In Fig. 4, we study the effect of credibility adjustments by measuring the difference in CCP iteration performance when traditional softmax functions replace them.
CAB
CAB
CAB
CAB
Selection 3
<|MaskedSetence|> Sentiment Analysis of COVID19 Here, we exemplify the benefits of training an ANN in real-time from streaming data. To this end, we analyze the impact of concept drifts on a sentiment analysis setting, specifically drifts that occurred during and due to the COVID19 pandemic. First, we trained a large Word2Vec model using 20% of English Wikipedia plus the Sentiment140 dataset (Go et al., 2009). Then, we trained an LSTM model (ten, 2022) using the Sentiment140 dataset together with the word embeddings we trained previously. After three epochs, we reached 78% accuracy on the training and the test set. However, language is always evolving. Thus, this model may not sustain its accuracy for long if deployed to analyze streaming data in real-time. We exemplify this by fine-tuning the word embeddings with 2M additional tweets published from November 1st, 2019 to October 10th, 2021 containing the following keywords: covid19, corona, coronavirus, pandemic, quarantine, lockdown, sarscov2. <|MaskedSetence|> <|MaskedSetence|>
**A**: 4.4. **B**: However, despite being small, this difference is concentrated onto specific keywords. **C**: Then, we compared the previously trained word embeddings and the fine-tuned ones and found an average cosine difference of only 2%.
CBA
ACB
ACB
ACB
Selection 2
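A minimal sketch of the comparison this passage describes: measuring how much each keyword's embedding moved after fine-tuning. The `emb_old`/`emb_new` lookups and the random vectors are hypothetical stand-ins for the pre- and post-fine-tuning Word2Vec models, not the paper's data.

```python
import numpy as np

def cosine_difference(u: np.ndarray, v: np.ndarray) -> float:
    """1 - cosine similarity; 0 means the vector did not move."""
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

keywords = ["covid19", "corona", "pandemic", "quarantine", "lockdown"]
emb_old = {w: np.random.randn(300) for w in keywords}  # stand-in vectors
emb_new = {w: np.random.randn(300) for w in keywords}

drift = {w: cosine_difference(emb_old[w], emb_new[w]) for w in keywords}
print(sorted(drift.items(), key=lambda kv: -kv[1]))  # most-drifted first
```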
<|MaskedSetence|> <|MaskedSetence|> The weight distribution is quite uniform and flat, which is easy to quantize. <|MaskedSetence|> Outliers make activation quantization difficult. The scale of outliers in activations is $\sim 100\times$ larger than most of the activation values.
**A**: 1. **B**: Activations are harder to quantize than weights. **C**: Previous work has shown that quantizing the weights of LLMs with INT8 or even with INT4 does not degrade accuracy (Dettmers et al., 2022; Yao et al., 2022; Zeng et al., 2022), which echoes our observation. 2.
ABC
ABC
ABC
ABC
Selection 1
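A toy numpy sketch of the phenomenon this passage describes: a single activation outlier inflates the per-tensor symmetric INT8 scale and crushes the resolution left for typical activations. This is illustrative only, not any paper's implementation.

```python
import numpy as np

def int8_quantize(x: np.ndarray) -> np.ndarray:
    scale = np.abs(x).max() / 127.0               # symmetric per-tensor scale
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale                              # dequantized values

acts = np.random.randn(1000)                      # "most of the activations"
acts_outlier = np.concatenate([acts, [100.0]])    # one ~100x outlier

err_clean = np.abs(int8_quantize(acts) - acts).mean()
err_outlier = np.abs(int8_quantize(acts_outlier)[:-1] - acts).mean()
print(err_clean, err_outlier)  # error grows roughly with the outlier ratio
```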
<|MaskedSetence|> <|MaskedSetence|> 4 that the non-zero initial conditions, in general, don't decay to zero in finite time. Next, we discuss the performance comparison of TV-OKID and the Information-state approach. <|MaskedSetence|> The results for the nonlinear systems are shown in Fig. 6. The error shown in the figures is the 1-norm of the mean error between the true response and the predicted response from 100 independent simulations, across all the output channels.
**A**: First, we show the importance of having a system identification technique that is immune to non-zero initial conditions. **B**: For the oscillator, two experiments were performed - one with zero-initial conditions and another with non-zero initial conditions, and the results are shown in Fig. 5. **C**: We show in Fig.
ACB
ACB
ACB
ACB
Selection 4
Approximate message passing (AMP) algorithms. AMP is a family of iterative algorithms that has been applied to several problems in high-dimensional statistics, including estimation in linear models [DMM09, BM11, KMS+12], generalized linear models [Ran11, SR14, SC19], and low-rank matrix estimation [DM14, RFG09, LKZ17]. For a broad overview, we refer the reader to [FVRS22]. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Furthermore, they have been used – in a non-mixed setting – to combine linear and spectral estimators [MTV22]. .
**A**: A key feature of AMP algorithms is that under suitable model assumptions, the empirical joint distribution of their iterates can be exactly characterized in the high-dimensional limit, in terms of a simple scalar recursion called state evolution. **B**: By taking advantage of this characterization, AMP methods have been used to derive exact high-dimensional asymptotics for convex penalized estimators such as LASSO [BM12], M-estimators [DM16], logistic regression [SC19], and SLOPE [BKRS20]. **C**: AMP algorithms have been initialized via spectral methods in the context of low-rank matrix estimation [MV21c] and generalized linear models [MV21a].
ABC
ABC
ABC
ABC
Selection 1
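For concreteness, one standard form of AMP for the linear model and its state evolution, sketched from the cited literature (our notation; see [BM11, FVRS22] for precise statements):

```latex
% AMP for y = Ax_0 + \varepsilon with A \in \mathbb{R}^{n \times p},
% \delta = n/p, and denoiser \eta_t (a sketch):
\begin{align*}
  x^{t+1} &= \eta_t\!\left(x^t + A^{\top} z^t\right), \\
  z^{t}   &= y - A x^{t}
             + \frac{1}{\delta}\, z^{t-1}
               \bigl\langle \eta'_{t-1}\!\left(x^{t-1} + A^{\top} z^{t-1}\right)\bigr\rangle,
\end{align*}
% with the scalar state evolution tracking the effective noise \tau_t:
\[
  \tau_{t+1}^{2} = \sigma^{2}
    + \frac{1}{\delta}\,
      \mathbb{E}\bigl[\bigl(\eta_t(X_0 + \tau_t Z) - X_0\bigr)^{2}\bigr],
  \qquad Z \sim \mathcal{N}(0,1).
\]
```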
<|MaskedSetence|> Methods of this kind [15, 16] tend to explicitly minimize a divergence that measures the gap between the source and target domains. A typical discrepancy metric, Maximum Mean Discrepancy (MMD) [15], has been extensively applied to domain adaptation tasks, and [17] further proposed multi-kernel MMD. Moreover, Weighted-MMD (WDAN) [16] integrated class priors with the original MMD via class-specific auxiliary weights. Another measure that can be utilized for domain alignment is covariance, leading to Deep Correlation Alignment (Deep CORAL) [18]. These methods only consider aligning the marginal distribution; in contrast, Deep Transfer Network (DTN) [19] proposed a conditional alignment method using pseudo labels of the target domain. [20] learned a joint distribution by dynamically quantifying the relative importance of the marginal and class-conditional distributions. Similarly, [21] dynamically aligned both the feature and label spaces. <|MaskedSetence|> <|MaskedSetence|>
**A**: [22] employed a teacher-student framework for source and target domains and then aligned their predictions. **B**: Along with jointly adapting the marginal and conditional distributions, CKET [23] further attempted to transfer discriminative information by enforcing the structure consistency between the original feature space and the latent feature space. The same team further underscored the significance of fine-grained cluster structures in the source and target domains [24] to enhance discriminative power. **C**: Discrepancy-based domain alignment.
CAB
CAB
CAB
CAB
Selection 3
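A hedged numpy sketch of the empirical MMD² these alignment methods minimize between source and target features (biased V-statistic with an RBF kernel; bandwidth and shapes are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

source = np.random.randn(64, 16)         # source-domain features
target = np.random.randn(64, 16) + 0.5   # shifted target-domain features
print(mmd2(source, target))              # shrinks as the domains align
```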
2.4 Directly Trained SNNs for RL Surrogate gradient methods introduce backpropagation (BP) into SNNs successfully and make it easy to train deep SNNs directly. <|MaskedSetence|> (2022); Liu et al. (2022) have used directly trained SNNs and fixed coders to construct DSQN. <|MaskedSetence|> <|MaskedSetence|>
**A**: Recently, a few papers Chen et al. **B**: Although these methods have high energy efficiency, they do not form a complete framework and have a poor trade-off between accuracy and latency. **C**: In addition, their applications are also limited to discrete action-space environments.
ABC
ABC
CAB
ABC
Selection 4
Let $K_{m,n}$ denote the complete bipartite graph with partite sets of size $m$ and $n$, respectively. In case $m=1$, we get a star graph with $n$ pendent vertices. Let $K_n$, $P_n$, and $C_n$ denote the complete graph, the path graph, and the cycle graph on $n$ vertices, respectively. Note that the spectrum is always real, but the per-spectrum may not be real. <|MaskedSetence|> Borowiecki (1985) [4] generalized this observation by showing that a bipartite graph $G$ contains no $C_k$ for any $k\equiv 0 \pmod{4}$ if and only if $\sigma(G)=\{\lambda_1,\lambda_2,\dots,\lambda_n\}$ and $\sigma_p(G)=\{i\lambda_1,i\lambda_2,\dots,i\lambda_n\}$. <|MaskedSetence|> <|MaskedSetence|> Earlier, Borowiecki and Jóźwiak (1983) had posed the following open problem.
**A**: Thus, if $G_1$ and $G_2$ are two bipartite graphs without $C_k$ for any $k\equiv 0 \pmod{4}$, then $G_1$ and $G_2$ are cospectral if and only if they are per-cospectral. **B**: Sachs (1978) had noted that if $G$ is a tree such that $\sigma(G)=\{\lambda_1,\lambda_2,\dots,\lambda_n\}$, then $\sigma_p(G)=\{i\lambda_1,i\lambda_2,\dots,i\lambda_n\}$. **C**: In other words, the permanental polynomial is not any more useful than the characteristic polynomial in the characterization of such graphs.
CBA
BAC
BAC
BAC
Selection 2
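For reference, the standard definitions behind $\sigma(G)$ and $\sigma_p(G)$ used in this passage (stated here for convenience, not quoted from the paper):

```latex
% The permanental polynomial of a graph G with adjacency matrix A(G) is
\[
  \pi(G, x) \;=\; \operatorname{per}\!\bigl(xI_n - A(G)\bigr),
  \qquad
  \operatorname{per}(M) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} M_{i,\sigma(i)},
\]
% and the per-spectrum \sigma_p(G) is the multiset of its roots, while
% \sigma(G) collects the roots of \det(xI_n - A(G)).
```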
7 Conclusions This paper presents the innovative CPPF++ method tailored for sim-to-real pose estimation. <|MaskedSetence|> To counteract voting collision, we innovatively model voting uncertainty by gauging the probabilistic distribution of each point pair in the canonical space, complemented by a noisy tuple filtering technique to eliminate votes linked to backgrounds or clutter. <|MaskedSetence|> Alongside this, we unveil the DiversePose 300 dataset, curated to provide a complementary assessment of top-tier methods in diverse real-world conditions. <|MaskedSetence|>
**A**: Rooted in the foundational point-pair voting scheme of CPPF, our approach reinterprets it using a probabilistic perspective. **B**: Our method also introduces several useful modules to further enhance the performance: the $N$-point tuple feature, online alignment optimization, and tuple feature assembly. **C**: Our empirical findings validate the effectiveness of CPPF++, highlighting a marked decrease in the sim-to-real gap on both category-level and instance-level datasets.
ABC
ABC
ABC
ABC
Selection 2
We use synthetic and real-world datasets. The synthetic dataset consists of bidiag, chain, collider, jungle, fulldag and random DAGs, each with 25 nodes. The variable distributions are categorical, with 10 categories. (We create the datasets using the code provided by Lippe et al. (2022).) <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: See Appendix E.1 for details. **B**: Both synthetic and real-world graphs are commonly used as benchmarking datasets (Ke et al., 2019; Lippe et al., 2022; Scherrer et al., 2021). **C**: The real-world dataset consists of alarm, asia, cancer, child, earthquake, and sachs graphs, taken from the BnLearn repository (Scutari, 2010).
BAC
ACB
ACB
ACB
Selection 4
Winoground Thrush et al. <|MaskedSetence|> The challenge is the differing arrangement of identical words across the pairs. Our evaluation spanned the entire dataset. VL-checklist Zhao et al. <|MaskedSetence|> <|MaskedSetence|>
**A**: (2022) Distinguishing itself by combining multiple sources, VL-checklist classifies 410,000 images into three categories. **B**: We analyzed a subset of 2000 images from each category to gauge our method’s effectiveness. **C**: (2022) Designed to evaluate vision-language models, this dataset contains 400 instances with two image-text pairs per instance.
CAB
CAB
CAB
ACB
Selection 1
<|MaskedSetence|> First, in Section 2 we describe a variety of deep ReLU neural network constructions which will be used to prove Theorem 1. <|MaskedSetence|> Then, in Section 3 we prove Theorem 4, which gives an optimal representation of sparse vectors using deep ReLU networks and will be key to proving superconvergence in the non-linear regime $p>q$. In Section 4 we give the proof of the upper bounds in Theorems 1 and 2. <|MaskedSetence|> We remark that throughout the paper, unless otherwise specified, $C$ will represent a constant which may change from line to line, as is standard in analysis. The constant $C$ may depend upon some parameters and this dependence will be made clear in the presentation.
**A**: The rest of the paper is organized as follows. **B**: Many of these constructions are trivial or well-known, but we collect them for use in the following Sections. **C**: Finally, in Section 5 we prove the lower bound Theorem 3 and also prove the optimality of Theorem 4.
ABC
CAB
ABC
ABC
Selection 4
Table 3: 7Scenes relocalization. The table shows the result of training on ScanNet and testing on the 7Scenes test set for relocalization using 1, 2, or all the mapped reference frames. <|MaskedSetence|> hard" refers to the [3D-3D] network in [3], fine-tuned over ScanNet image pairs with a small overlap. <|MaskedSetence|> <|MaskedSetence|>
**A**: DSAC* is a single-scene absolute pose regression network. For this experiment, we also report the precision (Prec.) at a position error of $25\,\mathrm{cm}$ and a rotation error of $5^{\circ}$. **B**: We use blue to indicate the best result among the RPR methods, and orange to indicate the best result among the feature matching methods. **C**: The errors are the average of the median error over all the scenes. "RPR [3D-3D] f.t.
BCA
CAB
CAB
CAB
Selection 4
Traditional methods for solving the closed-loop optimal control rely on solving the corresponding Hamilton–Jacobi–Bellman (HJB) equation. It is notoriously difficult to solve those equations in high dimensions based on classical grid-based methods, due to the so-called curse of dimensionality (Bellman, 1957). To overcome this essential difficulty, there has been active research in recent years to approximate the control and value functions with other function approximators and proper objective functions to find the optimal approximators. <|MaskedSetence|> (2021); Kunisch et al. <|MaskedSetence|> (2002), and neural networks Han and E (2016); Nakamura-Zimmerer et al. (2021); Böttcher et al. (2022). Particularly, neural networks have received significant interest due to their exceptional ability in high-dimensional function approximation and their flexibility in optimization with various loss functions. For instance, it is feasible to employ loss functions linked to the classical Hamilton-Jacobi-Bellman (HJB) equation Kunisch et al. (2023) or the Pontryagin principle Meng et al. (2022); Onken et al. <|MaskedSetence|> In this work, we focus on neural network-based methods and loss functions that do not require the previously mentioned condition. The progress made in this direction can be categorized into two families of methods: offline supervised learning and online direct policy optimization. We introduce their details in the following subsections accordingly. .
**A**: (2023), sparse grids Kang and Wilcox (2017), kernel functions Weston et al. **B**: Notable examples of such approximating functions include sparse polynomials Azmi et al. **C**: (2022), especially when the Hamiltonian minimization can be explicitly solved.
BAC
BAC
BAC
BAC
Selection 1
<|MaskedSetence|> time for the PCQR-1 probability lower and upper bound predictions in the Starcraft 2 and Tamarisk domains. ECE is measured with 30 equal-width bins. Standard deviations are depicted by the semi-transparent regions. Figure 2: Reliability diagrams for the PCQR-1 probability lower bound predictions in the Starcraft 2 and Tamarisk domains. Data from all data partitions and time steps are binned into 30 equal-width bins according to the predicted coverage probabilities. The gray histograms give the bin counts. The x-axis depicts the mean predicted coverage probability in each bin. <|MaskedSetence|> <|MaskedSetence|>
**A**: The y-axis shows the observed coverage rate in each bin. **B**: Figure 1: Mean Expected Calibration Error (ECE) across all $k$ data partitions vs. **C**: The blue line corresponds to the experimental results, and the black dashed line reflects perfect calibration.
BAC
BAC
BAC
BAC
Selection 3
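A small sketch of ECE with equal-width bins, matching the binning described in these captions (30 bins; `probs` are predicted coverage probabilities and `hits` indicate whether the true value fell inside the bound — the toy data below are illustrative, not the paper's):

```python
import numpy as np

def expected_calibration_error(probs, hits, n_bins=30):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs >= lo) & (probs < hi)
        if in_bin.any():
            conf = probs[in_bin].mean()   # mean predicted coverage in the bin
            acc = hits[in_bin].mean()     # observed coverage rate in the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece

probs = np.random.rand(1000)
hits = (np.random.rand(1000) < probs).astype(float)  # well-calibrated toy data
print(expected_calibration_error(probs, hits))
```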
<|MaskedSetence|> Note that in this example P2P [8] receives a high mIoU score even though the edited objects are incorrectly scaled or cut off. <|MaskedSetence|> People were asked to compare images edited by our method versus a baseline in anonymized and randomized order and rate (1) shape faithfulness, (2) text alignment, (3) image realism. In our final evaluation comparing Ours vs. P2P vs. <|MaskedSetence|>
**A**: By weighting each sample’s mIoU with the percentage of correct keypoints (PCK) to compute the KW-mIoU, we can measure shape faithfulness more reliably. Figure 13: Screenshot of our annotator evaluation. **B**: Figure 12: Comparison between mIoU and Keypoint-Weighted mIoU (KW-mIoU). **C**: P2P + Shape, they also rated (4) best overall edit.
BAC
CAB
BAC
BAC
Selection 1
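One plausible reading of the KW-mIoU described above — hypothetical code, not the authors' implementation — where each sample's mIoU is down-weighted by its percentage of correct keypoints, so edits with wrongly posed objects cannot score high:

```python
import numpy as np

miou = np.array([0.85, 0.90, 0.40])  # per-sample mIoU (toy values)
pck = np.array([0.30, 0.95, 0.90])   # per-sample percentage of correct keypoints
kw_miou = (pck * miou).mean()        # keypoint-weighted mIoU
print(kw_miou)
```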
To provide an extensive experimental study, we collected 15 diverse classification data sets from the public and open data repository OpenML [48]. These datasets span various domains, ensuring different representations of real-world scenarios. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> For transparency and reproducibility, all datasets used in our experimental evaluation are publicly available (https://www.kaggle.com/datasets/up201204722/original).
**A**: Also, there is an average of 28 numerical and two nominal attributes. **B**: Common characteristics include: i) binary class target, ii) number of instances between 1,043 and 15,545, and iii) number of features greater than five and less than 103. **C**: The average number of instances of the retrieved data sets is 4,916.
ACB
BCA
BCA
BCA
Selection 3
<|MaskedSetence|> [1] developed a simple self-supervised framework trained on pseudo labels without the need for extra training rounds. Zhang et al. <|MaskedSetence|> [24] improved the performance of segmentation models by utilizing pseudo labels and introducing a powerful transformer-based backbone. Later, Hoyer et al. <|MaskedSetence|> [17] presented a new prompt-based approach to zero-shot unsupervised domain adaptation.
**A**: [70] introduced a self-training approach that is able to denoise pseudo labels and learns structural information by enforcing the consistency between augmentations. Hoyer et al. **B**: further improved their approach by using multi-resolution cropped images [25] and a masked image consistency learning strategy [26] to enhance contextual learning. Fahes et al. **C**: Araslanov et al.
CAB
CAB
CAB
CBA
Selection 3
From the transmitter’s perspective, the semantic decoder at the receiver is dictated by the interpretation of the agreed language. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Therefore, for the mother, semantic encoding is the problem of choosing the best messages to express her meaning such that the meaning can be precisely conveyed to the son. A practical example of semantic encoding is video compression, wherein international standards typically specify only the decoder [27]. The manufacturers are free to design the video encoders as long as the encoded video can be decoded by the standardized decoder. While we will assume that the stochastic semantic channel and interpretation mappings are known by the encoder in this work, a data-driven approach where this mapping can be learned through interactions is left as a potential future extension. .
**A**: The mother knows the way her son would interpret her message. **B**: For a figurative analogy, consider a mother talking to her son. **C**: The semantic encoding problem is thus how to encode the intended meaning using messages to minimize the misinterpretation at the receiver, considering the semantic channel as well as the given semantic decoder.
CBA
CAB
CBA
CBA
Selection 1
Previous learning paradigms, such as single-label learning (SLL) and multi-label learning (MLL), address the fundamental question of which label describes the instance. <|MaskedSetence|> They are not suitable for some real applications in which the overall distribution of the labels matters. Besides, real-world data with natural measures of label importance exist. Motivated by the above facts, label distribution learning (LDL) was formally proposed by Geng in 2016 [19], which is a more general learning framework than SLL and MLL. <|MaskedSetence|> <|MaskedSetence|> The emotion distribution learning method was proposed to output the intensity of all basic emotions on a given image [21]. This method addresses the issue of treating the facial expression in an image as only a single emotion. One unified framework with a lightweight architecture was designed to jointly learn age distribution and regress age using the expectation of age distribution [20]. This approach alleviates the high computational cost of large-scale models and the inconsistency between the training and evaluation phases. Motivated by inaccurately annotated landmarks, the soft facial landmark detection algorithm was developed in [22]. It associates each landmark with a bivariate label distribution (BLD), learns the mappings from an image patch to the BLD for each landmark, and finally obtains the facial shape based on the predicted BLDs.
**A**: However, neither SLL nor MLL can directly handle further questions with more ambiguity. **B**: It concentrates on the ambiguity on the label side to learn the latent distribution of the labels. **C**: Generally, the label distribution involves a certain number of labels, each describing the importance to the instance. LDL has been adopted in a wide range of tasks.
CBA
ABC
ABC
ABC
Selection 4
<|MaskedSetence|> In contrast to previous efforts [11, 12, 13], our approach addresses the training mismatch issue, which emerges when samplers and reconstructors are optimized as isolated subproblems rather than focusing on the comprehensive joint problem. <|MaskedSetence|> <|MaskedSetence|> This problem can be solved by alternately optimizing the sampler (with the reconstructor kept constant) through RL, and the reconstructor (with the sampler kept constant) through back-propagation. Our experiments on the fastMRI dataset [14] demonstrate L2SR's capability to enhance MRI reconstruction quality. The main contributions of this paper are:
**A**: The final reconstruction occurs only at the end, leading to a unique joint sampler-reconstructor optimization problem. **B**: Unlike dense-reward POMDPs, our sparse-reward formulation uniquely disentangles the sampling and reconstruction stages by deliberately eliminating intermediate reconstructions during the sampling process. **C**: In this paper, we propose a novel alternating training framework, L2SR (Learning to Sample and Reconstruct), which incorporates an innovative sparse-reward POMDP formulation to facilitate the joint optimization of MRI sampling policies and reconstruction models.
CBA
CBA
CBA
BCA
Selection 1
<|MaskedSetence|> Most segmentation models and some detection models use the transformer encoder as a backbone for feature extraction, pairing it with different decoders and task-specific heads to produce the final output [77, 18, 7, 31, 32, 33]. Other detection models employ a transformer encoder-decoder structure, often with a CNN backbone to extract visual features [13, 14, 16, 15, 25, 31, 32, 33]. <|MaskedSetence|> Throughout the model, layers progressively increase the embedding dimension and reduce the spatial dimensions to capture details at various image resolutions for higher accuracy. <|MaskedSetence|> However, many modern vision transformers have incorporated convolutions with the transformer architecture for better accuracy results [78, 7, 17, 18, 13, 14, 16, 15, 25, 22].
**A**: Since we analyze the first type of models with our segmentation model case studies, we focus on the second type of models for our detection model case studies. Following from ViT, the first vision transformer, models reshape 2D images into a 1D sequence of flattened image patches, which are linearly projected into an embedding dimension [20]. **B**: ViT is convolution-free since the original transformer layers consist of multi-head self-attention layers and multi-layer perceptrons (MLPs) [34]. **C**: State-of-the-art models for these tasks have shifted to using transformer-based architectures, building on the success of using transformers for natural language processing [34, 35] and image classification [20, 21, 24].
BCA
CAB
CAB
CAB
Selection 4
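A minimal sketch of the ViT-style patchify-and-project step this passage describes — reshaping a 2D image into a sequence of flattened patches and projecting them linearly (all sizes are illustrative):

```python
import torch
import torch.nn as nn

C, H, W, P, D = 3, 224, 224, 16, 768   # channels, image size, patch size, embed dim
img = torch.randn(1, C, H, W)

# (1, C, H, W) -> (1, N, P*P*C) with N = (H/P) * (W/P) patches
patches = img.unfold(2, P, P).unfold(3, P, P)          # (1, C, H/P, W/P, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, C * P * P)

proj = nn.Linear(C * P * P, D)   # linear projection into the embedding dimension
tokens = proj(patches)           # (1, 196, 768) sequence of patch embeddings
print(tokens.shape)
```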
Our SAMG problem cannot be solved by the existing work in the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) (Bernstein et al., 2002; Oliehoek et al., 2016), as shown in Fig. 2. <|MaskedSetence|> <|MaskedSetence|> Moreover, agents usually cannot get the true state $s$ in Dec-POMDP, while in SAMG, the true state $s$ is known by the adversaries. Adversaries can take the true state information and use it to select state perturbations for the MARL agents. The following Proposition 3.2 shows that under a fixed adversarial policy, the SAMG problem becomes a Dec-POMDP. However, in SAMG the adversary policy is not fixed; it may change according to the agents' policies (see Theorem 4.1 for details) and always selects the worst-case state perturbation for the agents. <|MaskedSetence|> We also give a two-agent two-state SAMG that cannot be solved by Dec-POMDP in Appendix A. Proposition 3.2.
**A**: In contrast, the policy in SAMG needs to be robust under a set of admissible perturbed states. **B**: The proof of Proposition 3.2 is in Appendix A. **C**: The adversary aims to find the worst-case state perturbation policy $\chi$ to minimize the MARL agents' total expected return, but the Dec-POMDP cannot characterize the worst-case state perturbations.
CAB
ACB
ACB
ACB
Selection 2
Evolution of embeddings To prove the effectiveness of iterative friend network reconstruction, we show the cosine similarity deviation distribution at the 1st, 1,000th, 5,000th, and 10,000th epoch in DOW30 and Yacht with missing ratio 0.3. Fig. 5 shows that the quality of generated embeddings gradually improves during the training process. <|MaskedSetence|> <|MaskedSetence|> The similarity of node embeddings at the 5,000th epoch is already highly close to the ground truth. <|MaskedSetence|>
**A**: It verifies the effectiveness of the iterative friend network reconstruction. **B**: The similarity deviation in the 1st epoch is relatively high, with most values distributed around 0.3–0.6 and a few less than 0.2. **C**: During training, the quality of the embeddings is much improved.
BCA
BCA
ACB
BCA
Selection 4
In this paper, we focus on designing Universal Perturbations for Interpretation (UPI) as universal attacks aimed to change the saliency maps of neural nets over a significant fraction of input data. To achieve this goal, we formulate an optimization problem to find a UPI perturbation with the maximum impact on the total change in the gradient-based feature maps over the training samples. We propose a projected gradient method called UPI-Grad for solving the formulated optimization problem. Furthermore, in order to handle the difficult non-convex nature of the formulated optimization problem, we develop a principal component analysis (PCA)-based approach called UPI-PCA to approximate the solution to this problem using the top singular vector of fast gradient method (FGM) perturbations to the interpretation vectors. <|MaskedSetence|> Finally, we demonstrate our numerical results of applying the UPI-Grad and UPI-PCA methods to standard image recognition datasets and neural network architectures. <|MaskedSetence|> <|MaskedSetence|> We can summarize the contributions of this work as follows:.
**A**: We demonstrate that the spectral UPI-PCA scheme yields the first-order approximation of the solution to the UPI-Grad optimization problem. To implement the UPI-PCA scheme for generating universal perturbations, we propose a stochastic optimization method which can efficiently converge to the top singular vector of first-order interpretation-targeting perturbations. **B**: The empirical results show the satisfactory convergence of the proposed stochastic optimization method to the top singular vector of the attack scheme, and further indicate the proper generalization of the designed attack vector to test samples unseen during the optimization of the universal perturbation. **C**: Our numerical results reveal the vulnerability of commonly-used gradient-based feature maps to universal perturbations which can significantly alter the interpretation of neural networks.
ACB
ACB
CBA
ACB
Selection 4
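A hedged sketch of the UPI-PCA idea as described in this passage: stack per-sample first-order perturbation directions and take the top right singular vector as the universal direction. The FGM-style directions below are random stand-ins, and the norm budget is illustrative; this is not the paper's implementation.

```python
import numpy as np

def top_singular_direction(per_sample_dirs: np.ndarray) -> np.ndarray:
    # rows: one FGM-style perturbation direction per training sample
    _, _, vt = np.linalg.svd(per_sample_dirs, full_matrices=False)
    return vt[0]  # the direction explaining the most interpretation change

n_samples, dim = 500, 1024
per_sample_dirs = np.sign(np.random.randn(n_samples, dim))  # stand-in FGM dirs
universal = top_singular_direction(per_sample_dirs)
universal *= 0.05 / np.linalg.norm(universal)  # project to a small norm budget
```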
We also report a classification model based on the proposed heterogeneous structure and leveraging the boosting procedure that, although not amenable to training on current hardware, was trained using a quantum-inspired optimizer based on Tensor Networks. <|MaskedSetence|> Going forward, hardware upgrades in terms of qubit numbers will lead to performance improvements. This behavior is illustrated in Fig. 8, where we show how the precision of the quantum classifiers evolves with system size. Extrapolating from these results and keeping other factors constant, the proposed quantum classifiers could outperform the benchmarked model within a few hundred addressable qubits. <|MaskedSetence|> <|MaskedSetence|>
**A**: This model showed the capability to already perform better across all the relevant metrics, achieving a precision of $29\%$, $1\%$ above the benchmark, with just $90$ learners (against $1200$) and runtimes of around $20$ minutes compared to more than $3$ hours for the benchmarked Random Forest. **B**: In addition, hardware improvements enabling the resolution of QUBOs with negative off-diagonal values could offer additional advantages to the quantum solution and improve performance over the classical benchmark. **C**: .
ABC
ABC
ABC
BAC
Selection 2
<|MaskedSetence|> The leftmost two columns correspond to the image and ground truth semantic segmentation; the third column shows the predictions of Dilated-FCN+GCNET [17]; the fourth column shows predictions from Maskformer [19]; and the final column shows predictions of our SGR component on top of Dilated-FCN. <|MaskedSetence|> Figure E: Qualitative results for downstream tasks of object detection and instance segmentation on MS-COCO. The leftmost two columns correspond to the image and ground truth object locations and their corresponding segmentation; the center column shows the predictions using our backbone; the rightmost two columns show predictions from using Res101-C4 and Res101-GloRE [17] backbones. <|MaskedSetence|>
**A**: Red rectangles on the images indicate locations where the models failed to correctly segment the pixels. **B**: . **C**: Figure D: Qualitative results on Cityscapes.
CAB
CAB
CAB
BAC
Selection 1
Time-space fractional diffusion problems involving the spectral fractional Laplacian have attracted considerable attention and been widely studied in the past decade. But all along, the nonlocal nature of time-space fractional differential operators has brought scholars great challenges in aspects of analysis and simulation. In [17], a Caffarelli-Silvestre extension turned problem (1) with $\gamma=0$ into a quasistationary elliptic problem with a dynamic boundary condition, which brought one extra dimension and a degenerate weight. <|MaskedSetence|> Through a Dunford-Taylor integral representation, finite element discretization using sinc quadrature for the spectral fractional Laplacian was proposed in [21, 19, 22, 23]. <|MaskedSetence|> But it is difficult to derive a finite element error estimate similar to that developed in [24, Theorem 4.1] for time-space fractional diffusion problems. <|MaskedSetence|> However, the use of the matrix transfer technique will encounter a thorny problem: how to treat the fractional power of the matrix obtained from spatial discretization, which usually leads to high computational cost and thus is rather time-consuming.
**A**: Recently, a best uniform rational approximation was found for solving spectral fractional elliptic problems [24, 25, 26]. **B**: In addition, there is also a family of widely used numerical methods combined with the matrix transfer technique, such as finite difference methods [27, 28, 29, 30], FEMs [28, 31, 32, 33], and also spectral and spectral element methods [34]. **C**: The finite element method (FEM) and spectral method based on the Caffarelli-Silvestre extension were introduced in [18, 19, 20].
CAB
ABC
CAB
CAB
Selection 4
Limitations. <|MaskedSetence|> The former refers to the required computation time to render all views. However, this does not threaten interactivity as long as everything gets parallelized and/or pre-computed beforehand [15]. For the latter case, he pointed out the tool’s limitation to visualize a much larger data set with more difficult-to-predict instances due to the increased space demand for the zone-based matrix. A simple solution to this problem could be filtering, which applies to scenarios where some metamodel pairs are performing poorly. <|MaskedSetence|> E1 referred to the important role that metamodels’ confidence plays in the data exposition, but instead of being aggregated as in our tool, it could be beneficial to use individual visual representations of spread. He continued to say that it is necessary to visualize the data distribution on demand to better relate to the underlying explanation of why some instances are constantly misclassified. <|MaskedSetence|>
**A**: In the future, we plan to improve MetaStackVis to overcome such limitations. . **B**: As E2 stated, the tool works solely with binary classification problems and does not support alternative hyperparameter optimization techniques [11]. **C**: Efficiency and scalability were two concerns raised by E4.
CBA
BAC
CBA
CBA
Selection 3
6.2 Advantages of the WOPSIP method The dG scheme based on the WOPSIP method has advantages on anisotropic mesh partitions. One key advantage of the WOPSIP method is that it does not require any penalty parameter tuning. The Stokes element proposed in (2.25) satisfies the inf-sup condition. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: This enables meshes with hanging nodes, whereas handling those meshes in the classical CR nonconforming finite element might be difficult. **B**: Another advantage of the method is that the error analysis of the technique is studied on more general meshes (BreOweSun12 ; BreOweSun08 ; Bre15 ; BarBre14 ) than conformal meshes. **C**: Here, we briefly treat the WOPSIP method on nonconforming meshes in our framework. .
BAC
BAC
BAC
ABC
Selection 1
Multi-Modal Dialogue Dataset. In the visual dialogue domain, most previous studies are divided into two categories depending on whether the image is grounded or sharing in the dialogue. The image-grounded dialogue task aims to answer questions Antol et al. <|MaskedSetence|> (2017); Seo et al. (2017); Kottur et al. <|MaskedSetence|> <|MaskedSetence|> (2018); Meng et al. (2020); Wang et al. (2021b); Zheng et al. (2021) about given images..
**A**: (2019) or generate natural conversations Mostafazadeh et al. **B**: (2017); Shuster et al. **C**: (2015); Das et al.
CAB
CAB
CAB
CAB
Selection 4
Mixing is an augmentation technique that combines pixels of two training images to create highly perturbed samples. It has been utilized in classification (Berthelot et al., 2019; Yun et al., 2019; Hongyi Zhang and Lopez-Paz, 2018) and semantic segmentation (French et al., 2020; Olsson et al., 2021). Especially in semi-supervised learning of semantic segmentation, mixing methods, such as CutMix (Yun et al., 2019) and ClassMix (Olsson et al., 2021), achieved promising results. <|MaskedSetence|> DACS (Tranheden et al., 2021) adapts ClassMix (Olsson et al., 2021) for domain adaptation by mixing across domains. In source-free domain adaptation, access to the source domain dataset is not available. <|MaskedSetence|> <|MaskedSetence|> In order to utilize mixing, we propose an metric based online ClassMix method where patches are sampled based on our reliability metric. .
**A**: This makes it impossible to combine target images with source patches. **B**: While the former cut rectangular regions from one image and paste them onto another, the latter use a binary mask belonging to some classes to cut. **C**: Additionally, the absence of labels in the target domain dataset presents a challenge when trying to use ClassMix.
BAC
BAC
BAC
BAC
Selection 1
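A short sketch of ClassMix-style mixing as described in this passage — a binary mask built from half the classes of one label map, used to paste those pixels onto a second image. The arrays are toy stand-ins, and the metric-based patch sampling the authors propose is not shown.

```python
import numpy as np

def classmix(img_a, lab_a, img_b, lab_b, rng=np.random.default_rng()):
    classes = np.unique(lab_a)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(lab_a, chosen)                  # pixels of the chosen classes
    img = np.where(mask[..., None], img_a, img_b)  # paste them onto image b
    lab = np.where(mask, lab_a, lab_b)
    return img, lab

img_a, img_b = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
lab_a = np.random.randint(0, 5, (64, 64))
lab_b = np.random.randint(0, 5, (64, 64))
mixed_img, mixed_lab = classmix(img_a, lab_a, img_b, lab_b)
```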
4.3 Causal Recommendation Data-driven recommender systems achieve great success in large-scale recommendation scenarios Wule-NeuralRec . <|MaskedSetence|> The reason is thought of as the lack of modeling causality to avoid capturing spurious correlations PDA ; wenjie-ood . Many efforts try to incorporate causality into neural recommendations to overcome these drawbacks PDA ; wenjie-ood ; IPW-saito . There are mainly two types of work. The first line of research is based on the potential outcome framework IPW-saito ; rubin2005causal , where IPS IPW-saito and doubly robust DR are utilized to achieve unbiased recommendation. Another line of research is based on the structural causal model pearl2009causality ; PDA ; wenjie-clickbait ; wenjie-ood . The existing efforts usually analyze the causal relationships with causal graphs and then estimate the target causal effect with the intervention PDA or counterfactual inference wenjie-ood ; wenjie-clickbait for debiasing, fairness, or OOD generalization. <|MaskedSetence|> <|MaskedSetence|>
**A**: Nevertheless, all previous methods do not consider dealing with the spurious correlation issues for SSL. **B**: However, recent work finds that they face various biases PDA ; IPW-saito ; wang2021deconfounded , unfairness fairness , and low OOD generalization ability issues wenjie-ood . **C**: Besides, to the best of our knowledge, existing work in recommendation does not take invariant learning IRM ; HRM to remove spurious correlations..
ABC
BAC
BAC
BAC
Selection 3
The rest of this paper is organized as follows. <|MaskedSetence|> Then, we propose a primal-dual alternating proximal gradient (PDAPG) algorithm for nonsmooth nonconvex-(strongly) concave minimax problem with coupled linear constraints, and then prove its iteration complexity. <|MaskedSetence|> Numerical results in Section 4 show the efficiency of the two proposed algorithms. <|MaskedSetence|>
**A**: In Section 2, we first establish the strong duality with respect to $y$ under some feasibility assumption for the nonconvex-concave minimax problem (P) with linearly coupled equality or inequality constraints. **B**: In Section 3, we propose another primal-dual proximal gradient (PDPG-L) algorithm for the nonsmooth nonconvex-linear minimax problem with coupled linear constraints, and also establish its iteration complexity. **C**: Some conclusions are made in the last section.
ABC
ABC
ACB
ABC
Selection 2
Although PointNet++ [2] was effective in capturing local geometric information, it still lost contextual information due to the max-pooling operation. Several methods [2, 3, 4, 8, 9, 10, 13] since PointNet++ have used raw $x,y,z$ point coordinates as input. <|MaskedSetence|> <|MaskedSetence|> From the queried neighborhood points, subtracting the anchor FPS point gives directional vectors of the neighborhood. <|MaskedSetence|> After this step, the directional vectors and the computed distance (in the case of ball querying) are ignored.
**A**: Secondly, group a small sample of neighborhood points at each FPS point based on this distance. **B**: First, sample farthest points (FPS) from the input points, and at each FPS point, query neighborhood points using a fixed radius ball, i.e., compute the distance to each point from each of the FPS/anchor points to check if the point is within the neighborhood of the FPS point. **C**: Finally, these directional vectors are mapped to higher dimensions in each set abstraction layer (SA) to get local features.
CAB
BAC
BAC
BAC
Selection 4
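A compact numpy sketch of the sample-and-group pipeline spelled out in this passage — farthest point sampling, ball query around each anchor, and anchor-relative directional vectors. The radius and counts are illustrative, not taken from any specific method.

```python
import numpy as np

def farthest_point_sampling(pts, k):
    idx = [0]
    dist = np.linalg.norm(pts - pts[0], axis=1)
    for _ in range(k - 1):
        idx.append(int(dist.argmax()))  # farthest point from those chosen so far
        dist = np.minimum(dist, np.linalg.norm(pts - pts[idx[-1]], axis=1))
    return np.array(idx)

def ball_query(pts, anchor, radius, max_pts=32):
    d = np.linalg.norm(pts - anchor, axis=1)
    return np.where(d < radius)[0][:max_pts]

pts = np.random.rand(2048, 3)
anchors = pts[farthest_point_sampling(pts, 128)]
for a in anchors:
    nbr = pts[ball_query(pts, a, radius=0.1)]
    directions = nbr - a   # directional vectors, later mapped to features
```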
It was observed in several papers [29, 49, 55] that the 2D locations of depth discontinuities, which we refer to as depth edges in this paper, are often poorly estimated, resulting in thick smooth depth gradients or incorrectly-localized edges (see the baseline in Fig. 1). Some applications that use predicted depth maps are highly sensitive to errors in depth edges, which are often part of the silhouette of objects. <|MaskedSetence|> In NVS methods that use depth explicitly [6], these type of localization errors in the 2D location of depth edges, may result in wrongly localized object parts in another newly generated view. <|MaskedSetence|> <|MaskedSetence|>
**A**: One example of such an application is Novel View Synthesis (NVS) – generating a new view of a scene captured from one or more views. **B**: When relying on inaccurate depth edges in the computation of the occlusion, significant artifacts and unrealistic appearance may occur (see Fig. 1). . **C**: Another important application that often uses predicted depth is virtual object rendering for AR [27], which computes for each pixel the closest occluding object from the point of view of the user.
ACB
CAB
ACB
ACB
Selection 3
Semantic Discriminator  Based on the perceptual loss model, we insert the semantic discriminator (sem. <|MaskedSetence|> As shown in Tab. IV, the semantic discriminator improves the FID, which is consistent with the improvement in object generation, such as faces. <|MaskedSetence|> D). We observed that integrating only the object discriminator into our base model leads to a marginal improvement in FID scores. However, a combination of the object and semantic discriminators yields the best performance. <|MaskedSetence|> The visual results presented in Figs. 3 and 8 demonstrate the significant improvements in image quality brought about by the object-level and semantic discriminators..
**A**: This suggests that additional semantic information may be essential for the object-level discriminator to effectively discern object structure. **B**: D) at the image-level only for model training. **C**: However, the semantic discriminator model still suffers from object distortion. Object-level Discriminators  We also evaluated the impact of object-level discriminators (obj.
BCA
BCA
BCA
BCA
Selection 4
[Stacked barplots for influence of factors on reaction] Stacked barplots representing the influence of different factors on participants’ reactions. There are 21 bars in total, separated into 7 sections. Section 1 represents the influence of metadata, section 2 represents content, and section 3 the design of the notification. Section 4 represents prior experience with such notifications, section 5 represents negative experience with security incidents. <|MaskedSetence|> <|MaskedSetence|> The first for the legitimate group, the second for malicious (no change), and the third for malicious (change). <|MaskedSetence|>
**A**: Section 6 shows the influence of whether participants thought the notification was phishing and section 7 whether the notification was expected. **B**: All sections show three separate bars. **C**: Answer choices are represented with different colors ranging from “no effect” on the left to “major effect” on the right. .
ACB
ABC
ABC
ABC
Selection 2
Texas-100X [15] contains demographic and medical information for patients across hospitals. The original dataset uses 100 possible classes for surgical procedure prediction. We slightly modify the task and focus only on data from the top 20 classes, reducing it to a 20-class classification task. We focus on the ratio of whites (race), females (sex), and Hispanics (ethnicity) as properties, and use a two-layer feed-forward neural network. CelebA [21] contains collections of face images of celebrities. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> For our experiments with feature extractors, we also conduct experiments where the adversary uses a pre-trained FaceNet [28] model trained on the CASIA-WebFace [43] dataset, with a two-layer network. It leads to a drop in performance (from $\sim 92\%$ to $\sim 82\%$), but the point of such an experiment is indeed to assess inference risk in more practical settings.
**A**: Each image is annotated with attributes. **B**: We conduct experiments with a convolutional neural network trained from scratch for this dataset, with five convolutional layers and pooling layers followed by three linear layers, which is the smallest network we could find with reasonable task accuracy. **C**: We use three different tasks: smile detection, gender prediction, and mouth-open prediction.
CAB
ACB
ACB
ACB
Selection 4
To the best of our knowledge, not much work has been done on reduced order methods for blasting. <|MaskedSetence|> Local ROMs based on time-domain partitioning are particularly suited for problems characterised by different physical regimes and fast development, such as blast waves. We apply a first reduction layer which consists of a piecewise-POD in time, followed by an Autoencoder (AE) which works as a second non-linear reduction step. Finally, we train a Deep Forward Neural Network (DFNN) to learn the latent space dynamics. <|MaskedSetence|> <|MaskedSetence|>
**A**: The offline procedure is summarised in Figure 2. The novelty of this work lies in the use of deep learning techniques combined with a local approach and the application to a large-scale system representing fast and transient dynamics. **B**: For example, Xiao et al. [42] used a POD-RBF scheme for solids interacting with compressible fluid flows focusing on crack propagation. In this work, we introduce a piecewise POD-DL scheme which combines a local POD approach and deep learning methods able to deal with fast and transient phenomena involving fluid-structure interactions such as blast waves in the vicinity of buildings. **C**: It should be noted that the code repository associated with this paper is available at: https://github.com/Mardgi/sissa-jrc.
BAC
BCA
BAC
BAC
Selection 4
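A hedged sketch of the first reduction layer described in this passage: POD of a snapshot matrix via SVD, keeping enough modes to capture a target energy fraction. A piecewise variant would simply apply this per time window; all sizes and the energy threshold are illustrative.

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.999) -> np.ndarray:
    # snapshots: (n_dofs, n_time) matrix of solution snapshots
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)          # cumulative energy fraction
    r = int(np.searchsorted(cum, energy)) + 1     # smallest basis meeting it
    return U[:, :r]                               # reduced basis of r POD modes

X = np.random.rand(10000, 200)    # toy snapshot matrix
Phi = pod_basis(X)
coeffs = Phi.T @ X                # latent coordinates, fed to the AE and DFNN
```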
<|MaskedSetence|> In [PKL22], the authors established sharp characterizations of injectivity of fully-connected and convolutional ReLU layers and networks. <|MaskedSetence|> Such functions commonly arise in solution algorithms for inverse boundary value problems. See also [AVLR21] for more results in this direction. There is a natural comparison between previous results and ours. Here we take a more functional-analytic approach to the problem, seeking the full resolution of the Calderón mapping instead of resolving it for or from particular inputs. This method has advantages, since we overcome the classical near-instability of inverse problems. On the other hand, our approach also has disadvantages: it is less explicit in nature, at times highly theoretical, and we do not provide a specific algorithm to find the DeepONet. <|MaskedSetence|>
**A**: Additionally, the authors in [dHLW21] developed a theoretical analysis for special neural network architectures, for approximating nonlinear functions whose inputs are linear operators. **B**: However, we have been able to provide a suitable form to compute it in some particular cases.. **C**: From a more mathematical point of view, there are several important recent results for Inverse Problems in terms of NNs.
CAB
CAB
CAB
BCA
Selection 3
3.4.1 CAN Protocol In small and medium-sized UAVs, the UAVCAN protocol operates on top of the CAN protocol. CAN operates at the first and second layers of the OSI model and is characterized by features such as serial communication, multimaster capability, and multicast support. If the bus is idle, any node can send a message, and all nodes can receive the transmitted message. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The payload size of CAN is limited to a maximum of 8 bytes..
**A**: CAN was introduced in 1986, and CAN 2.0A was released in 1993, followed by CAN 2.0B in 1995. **B**: CAN messages can be up to 8 bytes in length. **C**: UAVCAN operates on CAN 2.0B, which, unlike CAN 2.0A, supports two different identifier lengths: 11 bits and 29 bits.
ABC
BAC
BAC
BAC
Selection 3
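To make the frame-format details above concrete, here is a minimal sketch using the python-can library (the library choice and the identifier/payload values are illustrative assumptions, not taken from the excerpt):

```python
import can  # python-can package (assumed; the excerpt names no library)

# CAN 2.0A-style frame: 11-bit identifier, payload capped at 8 bytes.
std_frame = can.Message(
    arbitration_id=0x123,      # must fit in 11 bits (<= 0x7FF)
    is_extended_id=False,
    data=[0xDE, 0xAD, 0xBE, 0xEF],
)

# CAN 2.0B extended frame: 29-bit identifier, the format UAVCAN builds on.
ext_frame = can.Message(
    arbitration_id=0x1ABCDEF,  # must fit in 29 bits (<= 0x1FFFFFFF)
    is_extended_id=True,
    data=bytes(8),             # maximum CAN payload: 8 bytes
)

assert len(ext_frame.data) <= 8
```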
With the camera pose estimation from SAM, in the final registration module, we use a camera pose selection strategy to obtain the camera registration and subject fusion scheme to get unified subject registration in the BEV. We summarize the main contributions in this work: ❶ To the best of our knowledge, this is the first work to study the camera and subject registration for the multi-view multi-human scene, in which we alternately achieve the camera and human registration results in a unified BEV. <|MaskedSetence|> ❷ We propose a novel solution for this problem, in which we integrate the deep network-based VTM and a multi-view geometry-based SAM. <|MaskedSetence|> Extensive experimental results on this dataset show the superiority of the proposed method and the effectiveness of the key modules. <|MaskedSetence|>
**A**: This framework integrates both the generalization of the deep network for the human localization task and the stability of the classical geometry for the camera pose estimation task. ❸ We build a new large-scale synthetic dataset for the proposed problem. **B**: Furthermore, the cross-domain study on the real dataset verifies the generalization of our method.. **C**: This work breaks the limitations of using pre-given camera calibration or real BEV images in previous works.
CAB
CAB
ACB
CAB
Selection 2
Probing (Rogers et al., 2020; Conneau et al., 2018; Zhao et al., 2020) is one of the most prominent techniques widely leveraged for interpretability. Probing analysis aims at diagnosing which types of regularities are encoded in a representation extracted from data. The basis of probing is that if a simple classifier, e.g., a linear classifier, built upon the representations can solve a task sufficiently well, then the representations should contain informative features about the task already. Recent works strive to analyze code pre-trained models via probing. Wan et al. (2022) evaluated if pre-trained models learn programming language syntax; they measured the number of edges among node tokens in the AST and tried to learn this distance in the vector space. This approach cannot recover the AST structure, given the distances among all nodes. Although the number of edges between nodes can reflect the syntax information to some degree, it still has some problems. First, it cannot reconstruct AST structures in the vector space, which means that it only partially checks the code syntax. Second, two tokens with similar syntax have a small number of edges, but a small number of edges does not imply syntax closeness, as shown in the motivation section. Hernández López et al. <|MaskedSetence|> <|MaskedSetence|> However, the syntax distance for natural language may not be suitable for code data, since Allamanis et al. (2018) list the differences between code and natural language, including the differences between their syntax trees. In contrast, our approach is concise and efficient, and we directly recover AST structures from the vector space. In addition, we conduct semantic analysis of the code. Troshin and Chirkova (2022) developed a group of probing tasks to check if pre-trained models learn the code syntax structure and data flow information. First, this work does not consider the whole structure of the AST and only partially covers code syntax. Second, this work does not consider control-flow and control-dependency semantics. Shen et al. <|MaskedSetence|> However, they lack semantic analysis and only include four syntax types: assignment, function calling, if statements, and loop structures..
**A**: (2022) extract the syntax sub-structure and predict their syntax relation. **B**: This work converts AST to a middle-format binary tree and then learns a subspace utilizing the syntax distance (Shen et al., 2018) that is designed for natural language. **C**: (2023) analyzed pre-trained models in a global-level AST by projecting AST into a subspace.
CBA
CBA
BAC
CBA
Selection 4
Setup. <|MaskedSetence|> (Because MSCOCO does not provide captions for their test set, a portion of the validation set is used as a held-out test set.) Images in the MSCOCO dataset are each associated with 5 captions. <|MaskedSetence|> <|MaskedSetence|> Captions greater than 45 words in length are removed. For LC-PCFG, we preprocess the dataset by extracting token-level embeddings for each caption from the last layer of an LLM. Evaluation..
**A**: For models using word embedding matrices, the most frequent 10,000 words (based on white-space tokenization) are maintained with all other words mapped to a special UNK token. **B**: We follow the experimental setup of Zhao and Titov (2020), evaluating on the same splits of the MSCOCO 2014 dataset (Lin et al., 2014). **C**: The final dataset consists of 82,783 training, 1,000 validation, and 1,000 test images. During preprocessing, all sentences are converted to lowercase and numbers are replaced with the letter "N".
CAB
BCA
BCA
BCA
Selection 2
<|MaskedSetence|> A data-driven approach to solve the problem is proposed by collecting recordings of IMU from multiple drives with the IMU sensor strapped to a car at a known yaw mounting angle of zero degrees. <|MaskedSetence|> While data is collected with an external IMU sensor at a prescribed angle, the trained model is deployed and tested on an Android mobile device to demonstrate generalization. The model is validated and tested on data with real and simulated rotations. <|MaskedSetence|>
**A**: In this work, a DNN model (more specifically, a CNN) with a smoothing algorithm for real-time deployment is developed which receives as input a window of IMU measurements and outputs the estimated sensor yaw mounting angle. **B**: To summarize, the following contributions are made . **C**: To create a rich training dataset, the recorded IMU samples are rotated to simulate a wide range of mounting angles.
ACB
ACB
ACB
ABC
Selection 3
7.2 Correlation and Summarization of Categories Correlation between categories.  We have conducted a correlation analysis for the categories used in our collected survey data set. Individual visualization papers were treated as observations, and categories (cf. Table 3 and S5) were treated as dimensions/variables. Linear correlation analysis was then used to measure the association between pairs of categories. <|MaskedSetence|> <|MaskedSetence|> Due to the extensive size of the correlation matrix, we include only a thumbnail of it and refer the reader to S7 for more detail. <|MaskedSetence|>
**A**: The resulting matrix in Figure 6 contains Pearson’s r coefficient values and reveals specific patterns and intriguing cases of positive (green) and negative (red) correlation between categories. **B**: Since the interpretation of the coefficient values seems to differ in the literature [Coh88, Eva96, Tay90], we focus on values of correlations that appear interesting to us despite a potentially strong or weak correlation level. **C**: In Figure 6, we present some strong, medium, and weak correlation cases that caught our attention. .
CBA
ABC
ABC
ABC
Selection 4
<|MaskedSetence|> 3 for comparison). Moreover, we also see that between these model variants, the lower the CAB score is, the higher the accuracy on VQA-v2 is. Does this relationship hold more generally? To see this, we prepare three other baselines: A. CLIP image encoder + BERT text embeddings + Transformer head, fine-tuned altogether; B. <|MaskedSetence|> <|MaskedSetence|> The same as A. but uses V&L pre-trained weights before fine-tuning..
**A**: The results are shown in Table 6. We can see that adding deeper modality interaction reduces CAB (See Fig. **B**: The same as A. **C**: but with the CLIP image encoder frozen; C.
ABC
ABC
BAC
ABC
Selection 1
One of the key topics for electric ride-hailing markets is the charging and operational strategies for EVs. Many studies focus on capturing the distinct driving patterns of ride-hailing EV drivers. For instance, Ke et al. [20] modeled drivers’ behavior in a ride-sourcing market with EVs and gasoline vehicles, developed a time-expanded network to sketch out drivers’ working and recharging schedules, and found that passengers, drivers, and the platform are all worse off when the charging price increases. Qin et al. <|MaskedSetence|> Alam [22] emulated the daily trip patterns of ride-sourcing EVs using an optimization-based method and investigated the induced charging demand using agent-based simulation. Besides, a growing body of literature highlights the operations of electric ride-hailing markets. Shi et al. [23] formulated a dynamic vehicle routing problem (VRP) to capture the operations of a ride-hailing EV fleet and solved the problem using reinforcement learning. Kullman et al. [24] also employed deep reinforcement learning to develop operational policies for ride-hailing EVs, including vehicle assignment, charging, and repositioning decisions. Pricing strategies are considered in [25], where Hong and Liu examined the optimal price and wage of green ride services while taking into account the opportunity cost of EV drivers as well as passengers’ willingness to pay for eco-friendly trips. Ding et al. [26] considered the integration of ride-hailing and vehicle-to-grid (V2G) services and developed a game-theoretic framework to characterize the market equilibrium. Cai et al. [27] investigated the operations of integrated ride-sourcing and EV charging services in a nested two-sided market framework. Furthermore, related research is expanded into shared autonomous electric vehicles (SAEVs) [28][29][30][31]. Among others, Turan et al. [29] considered pricing, routing, and charging strategies. Yi and Smart [30] focused on joint dispatching and charging management. Al-Kanj et al. [31] used approximate dynamic programming to develop high-quality pricing, dispatching, repositioning, charging, and fleet sizing strategies for the operations of an SAEV fleet. Another important topic concerns policies aimed at promoting the electrification of the ride-hailing industry. Different policy options have been explored in the literature, with a primary focus on pricing incentives that include subsidies for EV purchases, incentive schemes for EV rental, and financial support to infrastructure supply [4][10][32][33][34][35]. <|MaskedSetence|> [10] examined the market response and electrification levels under three different regulatory policies, including annual permit fees, differential trip-based fees, and differential commission caps. They indicated that the last policy is the most cost-efficient and can simultaneously benefit drivers and passengers. Slowik et al. [32] also found that well-designed taxes and fee structures can make EVs the most economically attractive technology for ride-hailing fleets. <|MaskedSetence|> And ensuring access to overnight charging is another crucial step to dismantling barriers to vehicle electrification. However, Mo et al. [33] showed that given a limited budget, governments should subsidize charging infrastructure before supporting EV purchase. In summary, all aforementioned works focus on the operational strategies and/or public policies for electric ride-hailing fleets, while the planning of charging infrastructure for the ride-hailing network is not explicitly considered..
**A**: [21] used a multi-state model to identify two distinct driving patterns for ride-hailing EVs, which outline when, for how long, and under what state of charge a driver will decide to recharge an EV. **B**: Particularly, Liu et al. **C**: As per [35], governments should shift the target of current subsidies to the most intensively-used vehicles and the people most in need of financial support, for instance, ride-hailing drivers.
ABC
ABC
ABC
BAC
Selection 3
Table 5. <|MaskedSetence|> The best results are in bold. <|MaskedSetence|> H is the reconstructed hand representation by MANO (Romero et al., 2017). H+P or H+R in Signbert means fusing the reconstructed hand representation (H) into the features of the skeleton-based or RGB-based method. R+F denotes using RGB data and optical flow data as inputs. <|MaskedSetence|>
**A**: Comparison with the state-of-the-art fusion-based methods in terms of Top-1 and Top-5 accuracy on WLASL-2000 (Li et al., 2020c). **B**: . **C**: C denotes the five-clip ensemble method.
CBA
ACB
ACB
ACB
Selection 3
System requirements: Analyzing requirements to relate and map them to their corresponding architectural elements (e.g., architectural components) has become one of the major challenges faced by architects during the development process (Casamayor et al., 2012a). The failure of a high percentage of software projects is often caused by, for example, the lack of proper requirements analysis, incomplete requirement specifications, and changing requirements, among others (Hull et al., 2005). <|MaskedSetence|> For instance, in [S66], Gokyer et al. used an approach based on NLP and ML techniques for automatically mining Non-Functional Requirements (NFRs) expressed in plain text and mapping these NFRs to architectural elements, such as architectural components, in the problem domain. The mined architectural information can guide architects in making architectural decisions effectively. On the other hand, in [S43], Casamayor et al. <|MaskedSetence|> Then again Soliman et al. <|MaskedSetence|>
**A**: 31 out of 104 studies (29.8%) focus on mining the requirements of systems from repositories to assist the architecting process (see Table LABEL:minedArchitecturalInfo). We further categorized the mined system requirements into three subcategories: Quality Attributes (QAs), functional requirement, and constraint by following the classification described in (Cervantes and Kazman, 2016).
**B**: [S3] developed a Web-based search engine for searching several types of architectural information (including constraints) from Q&A sites. .
**C**: presented an approach based on NLP and ML techniques to semi-automatically mine and analyze functional requirements (from the textual description of requirements) that will become the responsibilities of certain architectural components in the system, in order to help bridge the gap between requirements analysis and architectural design.
ACB
BCA
ACB
ACB
Selection 3
The remainder of this article is organized as follows. <|MaskedSetence|> II, we describe the three main stages of a quantum neural network, i.e., the data encoding, parametrization and measurement stages. In Sec. III, we present our theoretical results and briefly discuss their implications. In Sec. IV, we give details about our simulation method. With this, in Sec. <|MaskedSetence|> Finally, in Sec. <|MaskedSetence|>
**A**: In Sec. **B**: VI, we give our concluding remarks. II Quantum neural networks. **C**: V, we present the results obtained in our simulations.
ACB
ACB
BAC
ACB
Selection 1
Generalization of Deep Neural Networks. Our work is related to the vast body of works that analyze the generalization error of deep neural networks. See, e.g., Jiang et al. (2019); Valle-Pérez and Louis (2020) for a comprehensive introduction. <|MaskedSetence|> As a result, a direct application of such results yields a vacuous bound as input sizes increase. On the other hand, Sokolic et al. (2017); Sannai et al. <|MaskedSetence|> (2021) establish a generalization error that captures the improvement from invariance and equivariance, which, however, is not applicable to the attention mechanism. Our theoretical analysis of the generalization error follows the framework of Bartlett et al. <|MaskedSetence|> In addition, the concurrent work (Zhang et al., 2022a) provides a PAC-Bayes analysis of the generalization error of the attention mechanism in the context of multiagent reinforcement learning. Optimization of Deep Neural Networks. Our work is built on the vast body of works that analyze the optimization error of deep neural networks (Allen-Zhu et al., 2019a, c, b; Arora et al., 2019; Du et al., 2018, 2019; Zhang et al., 2019; Zou et al., 2018; Zou and Gu, 2019; Allen-Zhu et al., 2019c; Cao and Gu, 2019; Li and Liang, 2018; Chizat et al., 2019; Mei et al., 2018, 2019; Rotskoff and Vanden-Eijnden, 2018; Nguyen, 2019; Sirignano and Spiliopoulos, 2020). Most of them focus on overparameterized neural networks in the neural tangent kernel (Jacot et al., 2018) or mean-field regime (Mei et al., 2018). Our work analyzes the optimization error in the neural tangent kernel regime, which is similar to Malladi et al. (2022). Meanwhile, it is worth mentioning that our theoretical analysis of the approximation and generalization errors is not restricted to the neural tangent kernel regime. .
**A**: However, most of them do not exploit invariance and equivariance. **B**: (2017), which stems from Bartlett (1996); Bartlett and Mendelson (2002). **C**: (2021); Elesedy (2021); Zhu et al.
ABC
ACB
ACB
ACB
Selection 2
To evaluate the performance of our PRCA and DRSA methods at extracting relevant subspaces and producing disentangled explanations respectively, we perform experiments on the ImageNet [85] and Places365 [86] datasets. For ImageNet, we consider a subset of 50 classes (for ease of reproducibility and maximizing class coverage, we choose ImageNet classes with indices {0, 20, 40, …, 980}). <|MaskedSetence|> These models are two VGG16 [26] models—which are from the TorchVision (TV) [27] and NetDissect (ND) [51] repositories, denoted by VGG16-TV and VGG16-ND (we remark that VGG16-ND is our PyTorch version of the model (originally in Caffe [87]’s format) on which the relation between concepts and feature maps has been analyzed in [51], allowing for a more direct comparison between the previous work and our DRSA approach; the original model is available at http://netdissect.csail.mit.edu/dissect/vgg16_imagenet/). <|MaskedSetence|> For Places365, we consider a subset of seven classes (these Places365 classes are similar to the ones visualized in Ref. [47]’s Fig. 4) and the ResNet18 [90] model provided by Ref. <|MaskedSetence|> We use the implementation of Shapley Value Sampling from Captum [92]. Our LRP implementation for VGG16 is based on LRP-γ used in [19]. For NFNets, we contribute a novel LRP implementation (see Supplementary Note K)..
**A**: respectively—and the NFNet-F0 model (the smallest variant of a more recent architecture called Normalizer-Free Networks (NFNets) [88]) from PyTorch Image Models [89]. **B**: and three publicly available pre-trained models. **C**: [47]. We evaluate our proposed methods with Shapley Value Sampling—an approximation of the classic Shapley value—and LRP; these two attribution techniques are chosen based on patch-flipping experiments [91] (see Supplementary Note D).
BAC
BAC
BCA
BAC
Selection 2
<|MaskedSetence|> Though DEviS provides a means for directly learning evidential uncertainty without sampling, the uncertainty might not be totally calibrated to enhance the reliability of model predictions. <|MaskedSetence|> Calibrated uncertainty will also help in detecting OOD data to caution AI researchers. More importantly, our CUP relies on the theoretically sound loss-calibrated approximate inference framework [21] as a utility-dependent penalty term for obtaining well-calibrated uncertainty in the medical domain. To this end, this paper introduces a loss function called CUP, which considers the relationship between accuracy and uncertainty in medical image segmentation. <|MaskedSetence|> This enables the backbone to improve segmentation performance, in addition to learning to provide well-calibrated uncertainty. The CUP is defined as: .
**A**: This is an optimization method used to calibrate uncertainty, aiming to maintain lower uncertainty for accurate predictions and higher uncertainty for inaccurate predictions during the training process, thereby achieving well-calibrated uncertainty. **B**: As pointed out in  [32, 20], a reliable and well-calibrated model should be uncertain in its predictions when being inaccurate, and be confident for the opposite case. **C**: 4) Calibrated uncertainty penalty.
CBA
CBA
CBA
CBA
Selection 1
<|MaskedSetence|> As shown in Table 1, the enhanced EPR achieves the best accuracy, outperforming the HJB-loss-alone and normalizing flow approaches in terms of the rRMSE and rMAE metrics. From Fig. 3, we can also find that the enhanced EPR presents an overall landscape more consistent with the simulated samples and the true/reference solution than the other two methods. <|MaskedSetence|> <|MaskedSetence|> Compared with the enhanced EPR and HJB loss alone, the normalizing flow captures the high-probability domain but does not explicitly take advantage of the information of the dynamics, thus making its performance the worst. .
**A**: The decomposition of the force also shows a better matching result. **B**: The learned potential by the HJB loss alone and normalizing flow tends to be rough and non-smooth near the edge of samples (Fig. 3b and c). **C**: • Accuracy.
CAB
ACB
CAB
CAB
Selection 4
<|MaskedSetence|> <|MaskedSetence|> Two systems provide overall protection (Primary Protection System or PPS) and secondary protection (SPS). The PPS uses microprocessor-based logic, while the SPS uses magnetic core logic (laddic) technology. The design requirement for the PPS was to provide reactor trip and engineered safety feature (ESF) actuation for all design faults. The design requirement for the SPS was to provide reactor trip and ESF actuation, in parallel with the PPS, for faults with a frequency in excess of about once in 1000 years. The reliability targets of the two systems were therefore set at 1 in 10 000 for the PPS and 1 in 100 000 for the SPS. <|MaskedSetence|>
**A**: Components Nuclear Electric plc constructed and operated the Sizewell B PWR. **B**: This could have been achieved by hardware, but the highest standards of software production available at that time and of demonstration of integrity had to be applied to provide software assurance of this.. **C**: 6.27.2.
CAB
CAB
CAB
BAC
Selection 2
Regarding the loss functions that play a decisive role in training the networks, Mean Squared Error (MSE) loss and Learned Perceptual Image Patch Similarity (LPIPS) loss are chosen to measure the distortion between the original images and the generated ones. <|MaskedSetence|> <|MaskedSetence|> The pre-trained VGG-net gives more attention to the structure and texture of the images and does well in telling such differences between images. <|MaskedSetence|> 2, the complete process of the base system is as follows. .
**A**: MSE loss measures the difference per pixel and shows their distance in the high-dimensional space, which helps maintain the similarity.
**B**: LPIPS loss helps fill this gap and makes the generated images visually closer to the original ones. According to all the above introductions and the structure shown in Fig.
**C**: Meanwhile, LPIPS loss proposed in [22] is calculated through a VGG-net that has been trained previously.
ACB
ACB
CBA
ACB
Selection 2
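A minimal sketch of the combined distortion objective described above, using the publicly available lpips package with its VGG backbone; the weighting factor w_lpips is an assumed hyperparameter, not taken from the excerpt:

```python
import torch
import lpips  # pip install lpips

# LPIPS with a VGG backbone, matching the VGG-based perceptual loss above.
perceptual = lpips.LPIPS(net='vgg')

def reconstruction_loss(x, x_hat, w_lpips=1.0):
    # Per-pixel distance: keeps generated images numerically close to the originals.
    mse = torch.mean((x - x_hat) ** 2)
    # Perceptual distance: penalizes structure/texture differences.
    # lpips expects NCHW images scaled to [-1, 1].
    lp = perceptual(x, x_hat).mean()
    return mse + w_lpips * lp

# Example with random stand-in images:
x = torch.rand(4, 3, 64, 64) * 2 - 1
x_hat = torch.rand(4, 3, 64, 64) * 2 - 1
print(reconstruction_loss(x, x_hat))
```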
<|MaskedSetence|> <|MaskedSetence|> Two localization methods are used by different AGVs in the testbed: marker/track-based and Simultaneous Localization and Mapping (SLAM)-based. In the former, the AGVs follow a track on the floor with an onboard camera. <|MaskedSetence|> Between RFID tags, the AGVs estimate their position via odometry, i.e., based on wheel rotations, in combination with dead reckoning. Since the AGVs remain on track, the transversal error is below a few mm, while the longitudinal error was in the order of few cm. In SLAM-based localization, the AGVs use a laser scanner to detect and estimate the distance to landmarks, which are defined in the map of the AGV. This method achieves a position accuracy in the order of few cm in our testbed. .
**A**: II-A2 Localization Precise position of the communicating devices is required to link the environmental conditions with the measured data. **B**: Additionally, Radio-frequency Identification (RFID) tags are placed on the track to provide the exact position information when the AGVs pass over. **C**: For that purpose, the position information provided by the AGVs was recorded during the measurements.
ACB
ACB
BCA
ACB
Selection 2
Noncontact vitals monitoring using light-wave sensing may prove superior to existing technologies because of the safe, ubiquitous and harmless nature of incoherent light as well as the absence of privacy issues that are common with camera-based approaches. Noncontact vitals monitoring using visible light has already been performed as a proof of concept that showed >94% accuracy in breathing rate measurement [27]. But visible light can be troublesome to subjects in dark environments, particularly during sleep. Therefore, more discreet alternatives, such as infrared (IR) light, are preferred, as IR light is not visible to the naked human eye. This study introduces a novel system model for human respiration monitoring and anomaly detection using IR-based light-wave sensing technology. We collected human-like respiration data in a controlled environment with high precision and repeatability. <|MaskedSetence|> Therefore, the main contributions of this project are the development of the IR-based light-wave sensing system for noncontact respiration monitoring and the use of machine learning to smartly detect anomalous breathing and discard faulty data. The remainder of this manuscript is organized as follows. Section II includes an overview of related works from the literature on breathing anomaly detection using various sensing technologies and machine learning. Section III describes various human breathing patterns from the literature to be used as breathing classes for anomaly detection. <|MaskedSetence|> <|MaskedSetence|> Next, Section VI describes the handcrafted features used and their extraction process. The data classification process using the chosen machine learning algorithms is described in Section VII, and the results along with their interpretations are discussed in Section VIII. Finally, Section IX presents the conclusions drawn from the whole effort and forecasts future research directions..
**A**: To identify anomalous breathing and faulty data, we applied machine learning algorithms to the handcrafted features extracted from the collected data. **B**: The details of hardware components, data collection and initial data processing are depicted in Section V. **C**: Section IV presents the system model, relevant theory and lock-in detection process used in this study.
ACB
BAC
ACB
ACB
Selection 3
In PSIs, the main private information about the clients which is revealed to a result recipient is the private set elements that the clients have in common. Thus, honest clients must receive a reward proportionate to the intersection cardinality, from a buyer. To receive the reward, the clients need to reach a consensus on the intersection cardinality. The naive way to do that is to let every client find the intersection and declare it to the smart contract. <|MaskedSetence|> Nevertheless, the honest majority assumption is strong in the context of multi-party PSI. Moreover, this approach requires all clients to extract the intersection, which would increase the overall costs. <|MaskedSetence|> This task could also be conducted by a single entity, such as the dealer; but this approach would introduce a single point of failure and all clients have to depend on this entity. To address these challenges, we allow any two clients to become extractors. <|MaskedSetence|> It is paid by the contract if the contract concludes that it is honest. This allows us to avoid (i) the honest majority assumption, (ii) requiring all clients to find the intersection, and (iii) relying on a single trusted/semi-honest party to complete the task. .
**A**: Each of them finds and sends to the contract the (encrypted) elements in the intersection. **B**: Under the assumption that the majority of clients are honest, then the smart contract can reward the honest result recipient (from the buyer’s deposit). **C**: Some clients may not even be interested in or available to do so.
BCA
BCA
BCA
BCA
Selection 4
The signal-to-noise level of the images can be estimated in different ways. We investigated different definitions and used the method that was the most compatible with a visual inspection of the images. <|MaskedSetence|> It assumes a noise with normal distribution and uses wavelets to identify the most likely standard deviation of the noise component. <|MaskedSetence|> <|MaskedSetence|> The total length of vessels is calculated as the sum of the arc-lengths of all vessel segments..
**A**: To prevent the method from capturing vessel variation, only the background of the image was used for the estimation. Blood vessel density is defined as the total length of blood vessels in an image divided by the image area. **B**: The method proposed in Donoho and Johnstone (1994) was used. **C**: To do this, we first apply a skeletonization algorithm to extract the medial lines of the vessels Palàgyi and Kuba (1998).
BAC
BAC
BAC
CBA
Selection 2
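The two measurements described above can be sketched with off-the-shelf scikit-image routines. Note the substitutions: estimate_sigma implements the Donoho and Johnstone (1994) wavelet estimator cited in the excerpt, but skeletonize is a stand-in for the Palàgyi and Kuba thinning algorithm, and counting skeleton pixels only approximates the per-segment arc-length sum:

```python
import numpy as np
from skimage.restoration import estimate_sigma
from skimage.morphology import skeletonize

def noise_sigma(background_crop):
    # Wavelet-based estimate of the std of normally distributed noise
    # (Donoho and Johnstone, 1994), computed on a background-only region
    # so that vessel variation is not mistaken for noise.
    return estimate_sigma(background_crop)

def vessel_density(vessel_mask, pixel_size=1.0):
    # Medial lines of the segmented vessels; counting skeleton pixels
    # approximates the sum of per-segment arc lengths.
    skeleton = skeletonize(vessel_mask)
    total_length = skeleton.sum() * pixel_size
    image_area = vessel_mask.size * pixel_size ** 2
    return total_length / image_area

# Stand-in data: a noisy image and a horizontal "vessel" mask.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, (128, 128))
mask = np.zeros((128, 128), dtype=bool)
mask[60:68, 10:120] = True
print(noise_sigma(img), vessel_density(mask))
```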
<|MaskedSetence|> <|MaskedSetence|> A more generalizable model is one which can achieve a loss as low as that during training in more general settings, for example, when the equivalence between probability distributions is relaxed, or when the training and test data have different support. We note that SLT offers no performance guarantees when the statistical properties of the test data deviate from those of the training data. Furthermore, when the support of the test data is not equivalent to the support of the training data, UAT is no longer valid, since it is built upon an assumption that an approximated function has a compact support (some UAT results cover density in non-compact domains, e.g. [16]; nonetheless, the authors proceeded with the assumption that a target function maps to zero outside of a given support). It is worth noting that the last condition is typically absent in traditional machine learning approaches, as an input standardization step ensures that the model never receives inputs outside the range of values it was trained on [64].
**A**: However, in the context of dynamical systems, standardization would modify the dynamics’ outcomes and break connections to physical reality, as distinct inputs would be non-injectively mapped to the same standardized values. .
**B**: The next two tasks, prediction and forecasting, relate to a model’s increasing generalizability capacity. Prediction of dynamics considers how well a model performs “out-of-sample”, i.e. when it is tested on data which is not part of the training set.
**C**: For example, in standard SLT, one considers a model’s ability to extrapolate to unseen input datapoints which are sampled from the same probability distribution function as the training data.
ACB
BCA
BCA
BCA
Selection 2
A bridge (cut edge) in a graph is an edge whose deletion increases the number of connected components. Similarly, a cut vertex is a vertex whose deletion (along with its edges) increases the number of connected components. A biconnected graph is a connected graph with no cut vertex. Also, a biconnected component (block) of a graph is a maximal biconnected subgraph of the original graph. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The bridge-block tree of a graph is a tree obtained by contracting the 2-edge-connected components; note that the edge set of a bridge-block tree corresponds to the bridges in the original graph. We use the following classic theorem of König [21], which states that the size of a minimum vertex cover is equal to the size of a maximum matching in bipartite graphs. Namely:.
**A**: A 2-edge-connected component of a graph is maximal 2-edge-connected subgraph of the original graph. **B**: A non-trivial biconnected component is a block that is not a bridge. **C**: We say a graph is 2-edge-connected if there is no bridge in the graph.
CBA
BCA
BCA
BCA
Selection 3
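These definitions and König's theorem can be checked directly with networkx; a small illustrative sketch (the graphs are made up for the example):

```python
import networkx as nx

# Two triangles joined by a bridge: blocks {0,1,2}, {2,3}, {3,4,5}.
G = nx.Graph([(0, 1), (1, 2), (2, 0),    # first triangle (a block)
              (2, 3),                    # a bridge
              (3, 4), (4, 5), (5, 3)])   # second triangle (a block)

print(list(nx.bridges(G)))                  # [(2, 3)]
print(list(nx.articulation_points(G)))      # cut vertices: 2 and 3
print(list(nx.biconnected_components(G)))   # vertex sets of the blocks

# Koenig's theorem on a bipartite graph:
# size of a maximum matching == size of a minimum vertex cover.
B = nx.complete_bipartite_graph(2, 3)
matching = nx.bipartite.maximum_matching(B, top_nodes=[0, 1])
cover = nx.bipartite.to_vertex_cover(B, matching, top_nodes=[0, 1])
assert len(matching) // 2 == len(cover)     # matching dict stores both directions
```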
<|MaskedSetence|> We use MLQA Lewis et al. (2020) and XQuAD Artetxe et al. <|MaskedSetence|> We use XNLI Conneau et al. (2018) for natural language inference (training on MNLI, Williams et al., 2018), and we use XCOPA Ponti et al. (2020) for evaluating causal commonsense reasoning (training on COPA, Roemmele et al., 2011). The data statistics and language codes are summarized in Appendix C. We experiment in the zero-shot setting with English-only task data for training and validation. Hyperparameters. We perform a hyperparameter search with the learning rates of [1e-4, 2e-4, 5e-4, 8e-4] for our main experiments on all datasets except COPA. <|MaskedSetence|> See Appendix B for detailed hyperparameters per task per model. For scheduled unfreezing experiments, we search for the hyperparameter k in the following range [25, 50, 100, 800, 1000]. The reported results are averaged across 4 runs on A100 or V100 GPUs..
**A**: (2020) for question answering (SQuAD, Rajpurkar et al., 2016 for training).
**B**: We conduct experiments on a diverse set of tasks and target languages.
**C**: For COPA, we found a smaller learning rate (1e-5) is better for scheduled unfreezing methods.
BAC
ACB
BAC
BAC
Selection 1
Moreover, we compare the performance of other representative language models with the BERT-base model (Table 4). In general, the BERT-base model outperforms other language models. TF-IDF has a slightly higher recall score and a much lower precision score, suggesting poor performance. <|MaskedSetence|> <|MaskedSetence|> 2021; Chen et al. 2021). <|MaskedSetence|>
**A**: The superiority of the BERT-base model over bag-of-words, TF-IDF, and word2vec is consistent with its performance in other classification tasks (Devlin et al. **B**: 2019; Zhang et al. **C**: The preprocessing procedures, implementation details, and hyperparameter settings are discussed in the Appendix. .
ABC
ABC
CBA
ABC
Selection 1
<|MaskedSetence|> Seastar (wuSeastarVertexcentricProgramming2021) proposes a vertex-centric compiler stack to generate performant kernels throughout the training and/or inference of the model. Graphiler (xieGraphilerCompilerGraph) proposes to program the message passing data flow graph and devises several TorchScript transforms to emit highly optimized inference code. <|MaskedSetence|> These prior arts 1) expose PyTorch tensors as operands of all operations to users and 2) replicate weights to unleash parallelism, due to a lack of support for flexible data access schemes and/or code generation. Thus, they suffer more or less from memory inefficiency and performance degradation. Although the general concept of multi-level IR is not new, this work proposes new optimizations appropriate for each level and effective in reducing data movement and code bloat in the current state of practice: linear operator reordering and compact materialization are two key and novel features to capture and eliminate repetitive computation across edge types. <|MaskedSetence|>
**A**: GNN end-to-end compilers.
**B**: Similarly, HGL (guiHGLAcceleratingHeterogeneous) is an RGNN compiler.
**C**: Section 3.7 discussed the generalizability of this work. .
CAB
ABC
ABC
ABC
Selection 2
<|MaskedSetence|> In particular, [23] exploits the peculiar structure of optimal centralized precoding in deterministic channels, its relation to a certain Rayleigh quotient, and a series of properties from the theory of semidefinite programming and quadratic forms. Unfortunately, these arguments do not seem applicable to our setup, which covers distributed precoding and random channels. Specifically, equations similar to [23, Eq. <|MaskedSetence|> (25)], and [23, Eq. (29)] seem difficult to derive. To address this limitation, we follow a different path. <|MaskedSetence|>
**A**: (20)], [23, Eq. **B**: Remark 2. Our derivation differs significantly from the derivation of the related result in [23]. **C**: We replace the above arguments by a variation of well-known uplink-downlink duality results under a sum power constraint, reviewed, e.g., in [33, 19]..
BAC
BCA
BAC
BAC
Selection 1
Fully parametric methods provide an estimate of the full hazard function. <|MaskedSetence|> This method extends to settings where the covariates are measured multiple times (i.e. time-series) (Nagpal et al., 2021a). However, the initial reports did not assess performance when time-varying effects of baseline measurements are present (Nagpal et al., 2021b) (Nagpal et al., 2021a). Clinical datasets often show significant time-varying effects that are relevant to disease-specific outcomes (Coradini et al., 2000) (Gao et al., 2006) (Na et al., 2020) (Zhu et al., 2022) (Salmon and Melendez-Torres, 2023). Not limited to these diseases, we expect all models that predict a survival outcome may gain improved performance by accounting for non-proportional hazards. <|MaskedSetence|> CBNN accounts for the time-varying effects by design. For all the discrete-time frameworks and architectures, a selection of smaller intervals of time or more survival times of interest may result in a few or no events per interval, which potentially leads to instability in the optimization function (Bhatnagar* et al., 2022). In contrast, selecting larger intervals may mask nonlinear, time-varying effects in the hazard function (Bhatnagar* et al., 2022). CBNNs can model time-varying interactions and a complex baseline hazard function without the trade-off between long or short interval lengths. CBNNs also estimate a full hazard function for all of follow-up time, rather than for discrete intervals of follow-up time. <|MaskedSetence|>
**A**: Compared to fully parametric approaches like DSM, CBNN explicitly incorporates time into the model, providing a simple way to account for time-varying effects that improve survival prediction.. **B**: For example, Deep Survival Machines (DSM), uses neural networks with a mixture of distributions to fit a flexible baseline hazard (Nagpal et al., 2021b). **C**: Cox partial log-likelihood based models assume the effects are time invariant (Cox, 1972).
BCA
BCA
ACB
BCA
Selection 2
Figure 9: Examples from the TDSTF test results from the first trial of TDSTF. HR predictions in subplots (a)-(c) are in beats per minute (bpm), while SBP and DBP predictions in subplots (d)-(i) are in millimeters of mercury (mmHg). The horizontal axis represents the relative time in minutes for each ICU stay. <|MaskedSetence|> Forecasts based on all 129 features show high accuracy, even without the conditional data, as is shown in (c), (f), and (i). The 95% confidence intervals are shown as the areas between the top and bottom bars of the violin histograms, with wider areas corresponding to more confidence and the middle bars representing the median values of the generated data points. The complexity of TDSTF was compared with that of CSDI using PyTorch-OpCounter (https://github.com/Lyken17/pytorch-OpCounter). The comparison considered the number of parameters and the Multiply-Accumulate (MAC) count of the models. The number of parameters determines the model complexity, while the MAC count reflects the computational cost. The training time for CSDI was 12 hours, while for TDSTF, it was only 0.5 hours. The inference per sample consumed more than 55 billion MACs in CSDI and 0.79 million in TDSTF. <|MaskedSetence|> The fact that TDSTF is more complex than CSDI but consumes much less computation verifies our assumption on the CSDI overhead due to its large amount of missingness input. .
**A**: Consequently, 14.913 and 0.865 seconds were required for inference per sample in CSDI and TDSTF, respectively, and TDSTF was more than 17 times faster.
**B**: The red dots indicate target feature values, and the blue dots indicate conditional values of the same target feature.
**C**: The results show that CSDI had 0.28 million parameters, while TDSTF had 0.56 million parameters.
BCA
BCA
BCA
BAC
Selection 1
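The parameter/MAC counting described above follows the standard PyTorch-OpCounter usage; a sketch with a stand-in model (the real comparison would profile CSDI and TDSTF, which are not reproduced here):

```python
import torch
from thop import profile  # PyTorch-OpCounter, pip install thop

# Stand-in network; inputs mimic one sample with 129 features.
model = torch.nn.Sequential(
    torch.nn.Linear(129, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 3),
)
x = torch.randn(1, 129)

macs, params = profile(model, inputs=(x,))
print(f"MACs per sample: {macs:,.0f}, parameters: {params:,.0f}")
```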
Many non-linguistic cognitive capabilities can be substantially enhanced by language input. <|MaskedSetence|> <|MaskedSetence|> Coupled with the fact that language inputs contain vast quantities of information about the world, and that language is both a crucial data source and representational substrate for much of people’s world knowledge, this evidence suggests that, in principle, a model trained exclusively on language input could acquire much of functional linguistic competence. Thus, we do not argue that functional linguistic competence is out of reach for language-based models; our main goals are (1) to highlight the conceptual distinction between formal and functional linguistic competence—which in the human brain draw on separate neural circuits, and (2) to demonstrate the gulf between LLMs’ formal and functional linguistic abilities. These facts lead to a speculation that, like the human brain, models that can master language use would also require or benefit from separate mechanisms for formal and functional competence. <|MaskedSetence|>
**A**: In humans, this relationship is particularly salient during development: babies learn new conceptual categories more easily when they are accompanied by linguistic labels [192], and children with delayed language access have delayed social reasoning abilities [193]. **B**: Even in adulthood, knowledge of specific number words predicts the ability to conceptually represent exact numbers [194]. **C**: We discuss this idea next..
ACB
ABC
ABC
ABC
Selection 4
Our study confirmed that privacy concerns weigh heavy on the minds of smarthome device owners, and may impede adoption of co-monitoring applications. This is consistent with studies that investigated the sharing of smart devices with other people and identified privacy and security challenges  (Brush, 2012; Jang et al., 2017) as factors that influence the decision of whether or not to share a device. Similar privacy concerns have been found in prior work on device sharing  (Zeng et al., 2017; Tabassum et al., 2020), where participants were only willing to share devices with those they most trusted or when privacy intrusions were less likely, such as when they were not at home. Our participants expressed similar thoughts. <|MaskedSetence|> These results further demonstrated that addressing privacy concerns, such as access by less trusted community members, may allow users to expand their use and sharing of smarthome devices in ways that provide desired outcomes. However, some of the solutions that participants designed to mitigate their concerns, or improve the use of co-monitoring, may introduce their own privacy issues. For example, a co-monitoring system that shares devices when one is not home would need some way to determine that, either through location or other sensing. <|MaskedSetence|> Some participants also suggested a way to check which of their contacts is available or nearby in the event of an emergency. <|MaskedSetence|> In addition to the system itself now collecting location, the potential to share that location with other people, even if only in rare situations, can have unintended consequences, allowing friends and family to determine that someone was not where expected  (Barkhuus and Dey, 2003). However, location-tracking services have a chance of success in such a system if users are given a simple option to turn off location tracking when it is not needed  (Barkhuus and Dey, 2003). There is a delicate balance between enabling goals such as “safety” and “convenience” without crossing the boundary of making smarthome device owners feel a system is “privacy intrusive”. This balance is difficult to meet, especially given the fact that smarthome device owners may share their devices with multiple people, each with different values and roles (i.e, family, or friends). .
**A**: Again, this requires data collection from the contacts themselves, which may impede their desires to be involved in such a system. **B**: Thus, users are likely to be more willing for their most trusted friends and family to be emergency contacts for co-monitoring, and less likely to share device access with other useful community members, such as nearby neighbors who may have easier physical access to one’s home. **C**: This then requires additional data collection or inference.
BCA
BCA
BCA
BCA
Selection 2
In Figure 13, SHAP values for a specific prediction made by a machine learning model are displayed using the sample with index zero. <|MaskedSetence|> The impact of each feature on the model’s final output is depicted graphically, with the base value being the model’s average output over the training dataset and the final value representing the prediction obtained from the sample of interest. According to this figure, it appears that the "cycle" feature has a significant impact on the model’s prediction of RUL (remaining useful life). This is confirmed and explained by all of the models in the figure. <|MaskedSetence|> <|MaskedSetence|>
**A**: The horizontal bar plot labeled "important scores" shows the Shapley value for each feature. **B**: This is consistent with the findings in Figures 10 and 12. . **C**: Additionally, it can be seen that ReLU-DNN and EBM have better explanatory power than FIGS and tree algorithms.
ACB
ACB
ACB
ABC
Selection 1
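The per-prediction decomposition described above (a base value, i.e., the model's average output over the background data, plus per-feature Shapley contributions) can be reproduced with the shap package; the model and dataset below are stand-ins, not the excerpt's RUL models:

```python
import shap
import xgboost

# Stand-in tabular regressor; the excerpt's models (tree ensembles,
# EBM, ReLU-DNN, FIGS) are not reproduced here.
X, y = shap.datasets.diabetes()
model = xgboost.XGBRegressor().fit(X, y)

explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# One prediction (sample index 0): base value plus per-feature contributions.
shap.plots.waterfall(shap_values[0])

# Global bar plot of mean |SHAP| per feature ("important scores").
shap.plots.bar(shap_values)
```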
<|MaskedSetence|> Figure 2 shows the relationship between conventional simulation and surrogate modeling. The constructed ML model returns immediate predictions for the input variables. Therefore, it can satisfy DT’s requirements. <|MaskedSetence|> The surrogate model can return predictions immediately when the input variables are given. <|MaskedSetence|>
**A**: Although there are several supervised ML methods (NN, PINN, MFNN), ONets could be a new option when building this surrogate model. Figure 2: Concept of surrogate modeling method.
**B**: The demand for conventional simulations has not disappeared, since they are needed to prepare the training data for ML modeling. .
**C**: In this approach, a solver predicts the system state under various input variable conditions (initial conditions, boundary conditions, material properties, etc.) in advance and builds an ML model using these as training data.
CAB
BCA
CAB
CAB
Selection 4
<|MaskedSetence|> Early works [9, 13, 18, 26, 47] use offline features extracted by expert models for modal fusion. Since the emergence of CLIP [38], [31, 37] transfer CLIP to the VTR task. <|MaskedSetence|> Afterward, using CLIP for the video-text retrieval task became a new paradigm. [10] uses text features as query vectors and applies the attention mechanism to image features. [44] designs a fine-grained token-wise interaction to calculate the similarity score. [34] designs a hierarchical aggregation mechanism of features. <|MaskedSetence|> However, all these works fine-tune the entire parameter set of CLIP, thus incurring high storage overhead. We focus on the parameter-efficient learning of VTR. .
**A**: 2.3 Video Text Retrieval [46, 3, 25, 2, 41, 51] are the most widely used datasets in video-text retrieval (VTR).
**B**: [32] designs a multi-grained interaction mechanism.
**C**: They show CLIP significantly outperformed the previous models.
BAC
ACB
ACB
ACB
Selection 2
<|MaskedSetence|> In this case, the model is estimated from data, and thus the modeling error is inevitable. Our suboptimality analysis can incorporate this modeling error to provide performance guarantees for these controllers. Related work: When the model is exact, the suboptimality analysis of RHC controllers, with constraints or economic cost, has been studied extensively in [4, 5, 6] and references therein. <|MaskedSetence|> <|MaskedSetence|> Other relevant works can be found in the performance analysis of learning-based RHC [12, 13, 14], where the controller actively explores the state space of the unknown system and the model is recursively updated. There, a control performance metric called regret is considered, which measures the cumulative performance difference over a finite time window between the controller and the ideal optimal controller. The impact of the modeling error has been investigated in the above analysis; however, the effect of the prediction horizon and the terminal value function on the control performance is not considered [14, 13, 12]..
**A**: The suboptimality analysis of RHC for linear systems with a structured parametric uncertainty is considered in [11]; however, the impact of the approximation in the terminal value function is not investigated. **B**: However, performance analysis in a setting where the system model is uncertain or unknown is rare. **C**: Moreover, we demonstrate an application of our analysis in the performance analysis of learning-based RHC controllers.
BCA
CBA
CBA
CBA
Selection 2
<|MaskedSetence|> 3DRNext denotes 3D ResNext [71]. <|MaskedSetence|> average-FLOPs represents the averaged FLOPs needed to process a single face crop. † denotes our estimates based on their visual encoders. Most methods incur higher costs by extracting features for each frame through stacking multiple adjacent frames (i.e., 11 in SPELL). <|MaskedSetence|>
**A**: Table 1: Comparison with SOTAs on AVA-ActiveSpeaker. **B**: R denotes 2D ResNet [26]. **C**: LoCoNet achieves the highest mAP with modest FLOPs. .
ABC
ABC
ABC
ABC
Selection 3
II.3 Measuring 1/f noise In order to measure the exponent of the power spectrum in the LSTM cell, temporal sequences of the specific activations have to be obtained. <|MaskedSetence|> <|MaskedSetence|> The code for this is available at https://github.com/NicholasCJL/imdb-LSTM/blob/master/get_LSTM_internal_vectorized.py. <|MaskedSetence|> (1994) is then used for analysis of the time series to obtain the scaling exponent of the series. In this work, we use a modified version of the algorithm, the unbiased DFA Yuan et al. (2018), in order to account for the relatively short time series due to the length-500 reviews..
**A**: To obtain the internal activations of the Keras LSTM cells, the cell was recreated in vanilla Python with NumPy Harris et al. **B**: (2020). **C**: Detrended Fluctuation Analysis (DFA) Peng et al.
ABC
BCA
ABC
ABC
Selection 4
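For reference, a textbook DFA-1 implementation in NumPy (Peng et al., 1994); this is the plain estimator, not the unbiased variant of Yuan et al. (2018) used in the excerpt, and the scale grid is an illustrative choice:

```python
import numpy as np

def dfa(x, scales=None):
    # Plain DFA-1: integrate the series, linearly detrend each window,
    # and fit log F(s) vs log s to obtain the scaling exponent alpha.
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                    # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(
            1, np.log10(len(x) // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n = len(y) // s
        segments = y[:n * s].reshape(n, s)
        t = np.arange(s)
        ms = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)           # linear detrend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(ms)))
    alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
    return alpha

# White noise should give alpha close to 0.5.
print(dfa(np.random.default_rng(0).standard_normal(5000)))
```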
We use normalized mutual information (NMI) as the clustering evaluation metric. <|MaskedSetence|> Table IV and Table V summarize the clustering results using, respectively, spectral clustering and k-medoids. We can summarize a few observations: 1) the clustering performances of the two different clustering methods roughly remain consistent; 2) there is no obvious winner for univariate time series, as all methods can achieve competitive performance; this makes sense, as DTW, MSM, TWED and TCK are all established methods; 3) our conditional CS divergence has an obvious performance gain for multivariate time series; it is also generalizable to Traffic and UCLA, in which the dimension is significantly larger than the length; 4) the performance of our conditional CS divergence is stable in the sense that our measure does not have a failing case; by contrast, DTW gets very low NMI values on Robot failure LP1-LP5, whereas TCK completely fails on Traffic and UCLA. TABLE IV: Clustering performance comparison (by spectral clustering) in terms of normalized mutual information (NMI). <|MaskedSetence|> <|MaskedSetence|>
**A**: Please refer to [68] for detailed definitions of NMI.
**B**: The best performance is in bold; the second best performance is underlined..
**C**: “-” indicates the corresponding measures cannot be extended to multivariate time series or fail to obtain meaningful results.
BCA
ACB
ACB
ACB
Selection 2
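The evaluation protocol described above can be sketched as follows; the divergence matrix is random stand-in data (the conditional CS divergence is not computed here), and the exponential conversion from divergence to affinity is an assumption:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)

# Stand-in for a pairwise divergence matrix between 60 time series.
D = rng.random((60, 60))
D = (D + D.T) / 2                       # symmetrize
np.fill_diagonal(D, 0.0)
labels_true = np.repeat([0, 1, 2], 20)  # ground-truth cluster labels

# Convert divergences to similarities, cluster, and score with NMI.
affinity = np.exp(-D / D.std())
pred = SpectralClustering(n_clusters=3, affinity='precomputed',
                          random_state=0).fit_predict(affinity)
print(normalized_mutual_info_score(labels_true, pred))
```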
Our main contribution lies in a rigorous error analysis as well as a complexity analysis of our algorithm. To that end, we first introduce quantum circuits that can perform arithmetic operations on two's complement numbers representing signed dyadic rational numbers, together with their complexity analysis. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Moreover, we show that for payoff functions which are bounded, our algorithm indeed has a speed-up compared to classical Monte Carlo methods. To the best of our knowledge, this is the first work in the literature which provides a rigorous mathematical error and complexity analysis for a quantum Monte Carlo algorithm which approximately solves high-dimensional PDEs. We refer to Remark 2.22 for a detailed discussion of the complexity analysis. .
**A**: This allows us to provide a rigorous error and complexity analysis when uploading first a truncated and discretized approximation of the multivariate log-normal distribution and then uploading an approximation of the CPWA payoff function in rotated form, where the approximation consists of truncation as well as the rounding of the coefficients of the CPWA payoff function.
**B**: In particular, we prove that the computational complexity of our algorithm only grows polynomially in the space dimension d of the Black-Scholes PDE and in the (reciprocal of the) accuracy level ε.
**C**: This together with a rigorous error and complexity analysis when applying the modified iterative quantum amplitude estimation algorithm [fukuzawa2022modified] allows us to control the output error of our algorithm to be bounded by the pre-specified accuracy level ε ∈ (0, 1), while bounding its computational complexity; we refer to Theorem 1 for the precise statement of our main result.
ACB
ACB
ACB
CBA
Selection 1
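For intuition, a classical sketch of the two's complement encoding of signed dyadic rationals that such arithmetic circuits operate on; the bit widths are illustrative, and the quantum circuits themselves are not reproduced:

```python
def encode_dyadic(x, n_bits, frac_bits):
    # Fixed-point two's complement encoding of a signed dyadic rational
    # x = m / 2**frac_bits; raises on overflow.
    m = round(x * (1 << frac_bits))
    assert -(1 << (n_bits - 1)) <= m < (1 << (n_bits - 1)), "overflow"
    return m & ((1 << n_bits) - 1)      # wrap negatives into two's complement

def decode_dyadic(code, n_bits, frac_bits):
    m = code - (1 << n_bits) if code >= (1 << (n_bits - 1)) else code
    return m / (1 << frac_bits)

# Example: -1.25 = -20 / 2**4 with 8 total bits, 4 of them fractional.
code = encode_dyadic(-1.25, n_bits=8, frac_bits=4)
assert code == 0b11101100                # two's complement bit pattern
assert decode_dyadic(code, 8, 4) == -1.25
```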
In this paper, we present the first active learning tool for fine-grained 3D part labeling. Given a set of 3D shapes segmented into fine parts, our labeling tool assigns one of the predefined labels to each part. These input parts are deemed atomic (i.e., indivisible); they can be as low-level as mesh triangles and as high-level as results obtained by an unlabelled semantic segmentation, or mid-level components from an over-segmentation. <|MaskedSetence|> <|MaskedSetence|> To achieve full accuracy, two criteria must be met: (a) the testing human labeler’s judgement of what is the correct labeling agrees with the GT; (b) the labeler does not make any error, e.g., due to carelessness or time pressure. We regard both types of violations as “human errors.” One may also refer to the former as an inconsistency between human labelings, which is likely to occur near fuzzy part boundaries. As shown in Fig. 2, our interactive labeling tool iteratively verifies or modifies part labels predicted by a deep neural network, with human feedback continually improving the network prediction. Specifically, our system consists of three modules for label proposal, verification, and modification. Label proposals are produced by dynamic graph CNN (DGCNN) [37], with the resulting label probabilities dictating whether to pass the predicted labels to the verification or modification module. <|MaskedSetence|> The iteration terminates once all part labels have passed human verification. .
**A**: Compared to conventional learned models with full autonomy, such a human-in-the-loop approach provides a viable option to achieve close to error-free part labeling on test sets, barring human errors (during testing, accuracy is measured against GT labeling which has been produced by humans).
**B**: In general, active learning iterates between automated labeling and human input for label rectification [44].
**C**: Both human-verified and corrected labels are passed back to the label proposal network to fine-tune it.
BAC
BAC
ABC
BAC
Selection 4
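The verify-or-modify loop described in this row is easy to state as pseudocode. Below is a schematic Python sketch with hypothetical placeholders: network.predict, network.fine_tune, human_verify, and human_modify stand in for the proposal network and the UI callbacks, and none of them are the authors' actual API. The confidence threshold that routes a part to verification versus modification is likewise an assumed detail; the row only says routing is dictated by the predicted label probabilities.

```python
def human_verify(part, label) -> bool:
    """Stub for the verification UI: returns True if the human accepts."""
    raise NotImplementedError

def human_modify(part):
    """Stub for the modification UI: returns the human-corrected label."""
    raise NotImplementedError

def label_parts(network, parts, conf_threshold=0.9):
    """Iterate until every part label has passed human inspection."""
    labels, done = {}, set()
    while len(done) < len(parts):
        for part in parts:
            if part in done:
                continue
            label, prob = network.predict(part)        # label-proposal module (DGCNN)
            if prob >= conf_threshold and human_verify(part, label):
                labels[part] = label                   # verification module
            else:
                labels[part] = human_modify(part)      # modification module
            done.add(part)                             # a human has signed off
        network.fine_tune(labels)                      # feedback sharpens proposals
    return labels
```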
<|MaskedSetence|> Each plot presents the SHAP values of Minerva’s features for 100 randomly sampled data points. For ransomware, we generate separate SHAP plots for each type, allowing us to highlight the distinctive contribution of features to classification, based on the diverse behavior exhibited by each ransomware type. Figures 6(a), 6(b), and 6(c) display the individual contributions of the Minerva features to the prediction of benign, traditional ransomware and multiprocess ransomware classes for the one-second window model. <|MaskedSetence|> This finding is expected, as benign applications typically perform partial writes to files, while ransomware aims to fully rewrite user files as part of its malicious activity. A similar pattern emerges with the read ratio feature, where benign applications often read only portions of files, while ransomware tend to read files fully to encrypt them. Figure 6(b), however, reveals exceptions to this trend, indicating that some traditional ransomware variants also exhibit partial file read and write behaviors. We hypothesize that these ransomware variants prioritize speed over completeness of encryption, sacrificing full file encryption to achieve higher overall encryption speed. Other important features are operation number and process number. Figure 6 underscores the descriptive nature of these features, as ransomware typically exhibits a notably higher number of operations and number of processes acting on files during the encryption activity, as previously highlighted in Figures 2(b) and 2(c). This characteristic is especially prominent for multiprocess ransomware, where multiple processes collaborate in the encryption process to mimic the per-process behavior of benign processes. Finally, the read and write entropy features play a crucial role in discriminating between benign and ransomware activity, with the write entropy feature having a particularly strong influence on the model’s classification for ransomware. <|MaskedSetence|> This outcome is expected, as certain benign applications display read/write entropy profiles akin to ransomware, as previously outlined in Section 4.2. .
**A**: An immediate observation is the significance of the write ratio feature as a distinguisher between benign and ransomware instances. **B**: Figure 6 presents the SHAP plots for the 1 sec window Minerva classifier, highlighting the importance of each selected feature to the final classification and how feature importance changes for different types of ransomware. **C**: Conversely, the impact of these features on benign activity is ambiguous.
CBA
BAC
BAC
BAC
Selection 4
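Per-class SHAP summary plots like the ones analyzed in this row can be reproduced with the public shap library. Below is a self-contained sketch on synthetic stand-in data; the feature names mirror those discussed above, but the classifier, data, and class encoding are assumptions for illustration, not Minerva's pipeline:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["write_ratio", "read_ratio", "write_entropy",
            "read_entropy", "operation_number", "process_number"]
X = rng.random((600, len(features)))       # stand-in per-window feature vectors
y = rng.integers(0, 3, size=600)           # 0=benign, 1=traditional, 2=multiprocess

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X[:100])   # 100 sampled points

# Older shap releases return a list of per-class arrays; newer ones a 3-D array.
per_class = shap_values if isinstance(shap_values, list) else \
    [shap_values[:, :, c] for c in range(shap_values.shape[-1])]
for name, vals in zip(["benign", "traditional", "multiprocess"], per_class):
    shap.summary_plot(vals, X[:100], feature_names=features, show=False)
```

Each summary_plot call yields one beeswarm of per-feature SHAP values, mirroring the separate plots for the benign, traditional, and multiprocess classes.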
The rest of this paper is organized as follows. <|MaskedSetence|> <|MaskedSetence|> In Section 3 we present a solution for the open question from [3], showing hardness of the 0-Or-2-Neighbors Best Response Pattern, and expanding the result to a larger sub-class of patterns that begin with 1,0,1. <|MaskedSetence|> The outline of this paper is also depicted in Figure 1 (some patterns which start with 1,0 were solved in [3], though for simplicity we omit them from Figure 1). .
**A**: In Section 4 we show hardness of all patterns beginning with 1,0,0 (where we also have a slightly more subtle division into sub-classes), and in Section 5 we show hardness of all patterns beginning with 1,0,1 that were not covered in Section 3, thus completing the characterization for all finite patterns. **B**: We then set out to show hardness of all remaining patterns, dividing them into classes. **C**: In Section 2 we introduce the formal model and some relevant definitions.
CBA
CBA
CBA
ABC
Selection 2
<|MaskedSetence|> Our main technical contribution is to provide an upper bound estimate for this frontier by solving a sequence of optimization problems. <|MaskedSetence|> <|MaskedSetence|> We also prove convergence guarantees for our algorithm and demonstrate how it can be used to benchmark existing fairness interventions. We quantify epistemic discrimination by comparing a classifier’s performance with the information-theoretic optimal given by the fairness Pareto frontier. Our experiments indicate that given sufficient data, state-of-the-art (SOTA) group fairness interventions are effective at reducing epistemic discrimination as their gap to the information-theoretic limit is small (see Figures 1 and 2). Consequently, there are diminishing returns in benchmarking new fairness interventions on standard (overused) tabular datasets (e.g., UCI Adult and ProPublica COMPAS datasets)..
**A**: At first, computing the fairness Pareto frontier can appear to be an intractable problem since it requires searching over all possible classifiers—even if the data distribution is known exactly. **B**: Here, we apply these results to develop an algorithm that iteratively refines the achievable fairness Pareto frontier. **C**: The proof technique is based on Blackwell’s seminal results (Blackwell, 1953), which proposed the notion of comparisons of statistical experiments and inspired a line of works introducing alternative comparison criteria (see e.g., Shannon, 1958; Cam, 1964; Torgersen, 1991; Cohen et al., 1998; Raginsky, 2011).
ACB
ACB
ACB
CAB
Selection 3
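In schematic form, the fairness Pareto frontier referenced in this row can be written as a constrained risk minimization over all randomized classifiers. The rendering below is a hedged sketch for orientation (generic notation, not the paper's exact definition):

```latex
% h ranges over all randomized classifiers; Disc(h) denotes a group-fairness
% gap (e.g., an equalized-odds violation) and \alpha a tolerance level.
\[
  \mathrm{FPF}(\alpha) \;=\; \inf_{h}\; \Pr\!\left[ h(X) \neq Y \right]
  \quad \text{s.t.} \quad \mathrm{Disc}(h) \;\le\; \alpha .
\]
% Epistemic discrimination of a trained classifier h_0 can then be gauged
% by its excess risk over FPF(\alpha) at the same disparity level \alpha.
```

The difficulty noted above is that the infimum runs over all classifiers; the iterative refinement replaces that search with a convergent sequence of tractable optimization problems.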
Many methods have been proposed for time-to-event prediction in survival analysis. The most conventional and prevalent method is a semi-parametric method called the Cox PH model [2]. <|MaskedSetence|> <|MaskedSetence|> Nevertheless, they suffer from the curse of dimensionality. <|MaskedSetence|> Particularly, a fully parametric method, referred to as deep survival machines (DSM) [12], has demonstrated competitive predictive performance compared with state-of-the-art methods. Nevertheless, DSM learns different base distributions for different instances, which makes its inner mechanism hard to interpret [13, 14]. .
**A**: Some nonparametric methods such as Kaplan-Meier [3], Nelson-Aalen [4], and Life-Table [5] are also widely used in survival analysis. **B**: Survival analysis also attracts the attention of the machine learning community and many machine learning methods [6, 7, 8, 9, 10, 11] have been developed to reveal the relationship between the features and the survival information. **C**: It assumes that the ratio between the hazard rates of any two instances is constant over time, known as the proportional hazard (PH) assumption.
CAB
ACB
CAB
CAB
Selection 1
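For reference, the conventional Cox PH baseline named in this row can be fit in a few lines with the lifelines library, shown here on its bundled recidivism example dataset; any survival DataFrame with a duration column and an event column works the same way:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                        # bundled example survival dataset
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")   # semi-parametric PH fit
cph.print_summary()                      # per-covariate hazard ratios
print(cph.predict_median(df.iloc[:5]))   # median time-to-event for 5 rows
```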
<|MaskedSetence|> <|MaskedSetence|> This shows the need for temporal models for real-world relation forecasting. In our work, we achieve this using a memory-based temporal node representation learning technique as explained in Section 3.2. We can also observe that models that use attention-based temporal representation, HyperNodeTPP and DHyperNodeTPP, perform better than non-attentional models, HGBDHE and HGDHE, by comparing the performance in event type prediction tasks. This justifies using neighborhood features for temporal node representation learning. <|MaskedSetence|>
**A**: 2018). **B**: Additionally, we have discussed the limitations of this model in Appendix F. . **C**: Further, one can observe that the temporal models perform much better than the static model GAT (Veličković et al.
CAB
CAB
CAB
BCA
Selection 2
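As a toy illustration of the memory-based temporal node representations this row credits for the gains, here is a TGN-style GRU memory update in PyTorch; the dimensions and the message construction are assumptions made for the sketch, not the paper's architecture:

```python
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    """Per-node memory vectors updated by a GRU cell on each event."""
    def __init__(self, num_nodes: int, dim: int = 64):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, dim))
        self.cell = nn.GRUCell(2 * dim, dim)     # message -> new memory

    def update(self, src: torch.Tensor, dst: torch.Tensor):
        # Message for src: its own memory concatenated with the endpoint's.
        msg = torch.cat([self.memory[src], self.memory[dst]], dim=-1)
        self.memory[src] = self.cell(msg, self.memory[src]).detach()

mem = NodeMemory(num_nodes=100)
mem.update(torch.tensor([0, 1]), torch.tensor([2, 3]))   # two interaction events
print(mem.memory[0].shape)                                # torch.Size([64])
```

Each event updates the memories of the nodes involved, so a node's representation reflects its interaction history rather than a static embedding; attention over neighborhood features can then be layered on top, as in the attentional variants compared above.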
7.1 The Importance of Stability Stability guarantees are central in a variety of contexts, despite the fact that many widely-used practical algorithms are not stable (Xu et al., 2011). <|MaskedSetence|> <|MaskedSetence|> Stability is further relevant to differential privacy guarantees; assuming worst-case stability (often called “sensitivity” in the privacy literature) is a standard starting point for constructing differentially private algorithms (Dwork, 2008). <|MaskedSetence|> We now discuss applications of algorithmic stability to generalization and conformal inference in greater detail. .
**A**: In the field of conformal prediction, distribution-free coverage guarantees rely upon the stability of the underlying estimators (e.g., Steinberger and Leeb, 2016, 2023; Ndiaye, 2022; Barber et al., 2021). **B**: For instance, Bousquet and Elisseeff (2002) establish generalization bounds for stable learning algorithms, and Mukherjee et al. (2006) show that stability is necessary and sufficient for empirical risk minimization to be consistent; related works include (Poggio et al., 2004; Kutin and Niyogi, 2002; Freund et al., 2004). **C**: Shalev-Shwartz et al. (2010) identify stability as a necessary and sufficient condition for learnability.
BCA
BCA
BCA
BCA
Selection 3
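The stability notion at the center of this row has a standard formalization: uniform stability in the sense of Bousquet and Elisseeff (2002), stated below in generic notation (a simplified paraphrase, not a quotation from the cited papers):

```latex
% A learning algorithm A is \beta-uniformly stable if deleting any single
% training point from the sample S changes its loss at every point z by
% at most \beta:
\[
  \sup_{S,\, i,\, z}\;
  \left|\, \ell\bigl(A(S), z\bigr) - \ell\bigl(A(S^{\setminus i}), z\bigr) \,\right|
  \;\le\; \beta .
\]
% With \beta = O(1/n) on samples of size n, this yields generalization
% bounds of order O(1/\sqrt{n}).
```

The same worst-case sensitivity is what calibrated noise is scaled to in the differential-privacy constructions mentioned above.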
<|MaskedSetence|> To this end, we introduced the FM* and max-ASW families. <|MaskedSetence|> <|MaskedSetence|> The derived conditions consist of direction optimality, separability, and injectivity. We then leveraged the theoretical results to develop the Slicing Adversarial Network (SAN), in which a generator and discriminator are trained with a modified GAN training scheme. Despite the ease of modifications and the generality, this model can impose direction optimality on the discriminator. Our experiments on synthetic and image datasets showed that SANs outperformed GANs in terms of both the sample quality and mode coverage. .
**A**: We then extended the result to a general GAN. **B**: By using a class of metrics that are included in both families, we derived the metrizable conditions for Wasserstein GAN. **C**: We have proposed a unique perspective on GANs to derive sufficient conditions for the discriminator to serve as a distance between the data and generator probability measures.
CBA
CBA
ACB
CBA
Selection 1
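For intuition about the sliced-Wasserstein machinery behind the max-ASW family named above, the plain (non-adversarial, averaged) sliced 1-Wasserstein distance between two equal-sized samples can be estimated as follows; this is a generic sketch, not SAN itself:

```python
import numpy as np

def sliced_w1(X, Y, n_dirs=128, seed=0):
    """Monte Carlo sliced 1-Wasserstein distance between samples X and Y."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit slicing directions
    px, py = X @ dirs.T, Y @ dirs.T                       # 1-D projections
    px.sort(axis=0)
    py.sort(axis=0)                                       # quantile coupling per slice
    return np.abs(px - py).mean()

X = np.random.default_rng(1).normal(size=(256, 2))
Y = np.random.default_rng(2).normal(loc=2.0, size=(256, 2))
print(sliced_w1(X, Y))    # grows as the two samples move apart
```

Loosely speaking, max-sliced variants optimize the slicing direction instead of averaging over random ones, which is the role direction optimality plays for the SAN discriminator.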
<|MaskedSetence|> al. of Deepmind) is an inference engine that generates hypotheses that generalise often. To achieve this, Evans formalised Kant’s philosophy to give the engine a “strong inductive bias”. The engine forms hypotheses from only very general assertions, meaning logical formulae which are universally quantified. <|MaskedSetence|> <|MaskedSetence|> Obviously this can work well, but only for the subset of possible tasks that the vocabulary is able to describe in this way (anything else will not be able to be represented as a universally quantified rule, and so will not be represented at all [27]). This illustrates how future research.
**A**: The tailoring of logical formulae to represent certain sequences amounts to a choice of 𝔳, and the use of only universally quantified logical formulae ensures the resulting hypothesis is weak. **B**: That is possible because the engine uses language specifically tailored to efficiently represent the sort of sequences to which it is applied. Our results suggest a simpler and more general explanation of why the engine’s hypotheses generalise so well. **C**: 6.0.1 The Apperception Engine: The Apperception Engine [24, 25, 26] (Evans et.
CBA
ABC
CBA
CBA
Selection 4
To address the above-mentioned issue, we present an attribute-guided multi-level attention network (AG-MAN) to improve retrieval accuracy in fine-grained fashion similarity learning. <|MaskedSetence|> <|MaskedSetence|> Then, when fine-tuning the pre-trained CNN for extracting image features, we suggest incorporating a classification loss that groups images sharing the same attribute but differing in sub-classes into a common category. This can further alleviate the feature gap problem by perturbing for object-centric feature learning. Once improved image representations are attained, we introduce an improved attribute-guided attention module to derive more accurate attribute-specific representations. The proposed AG-MAN consistently outperforms existing attention networks over three datasets. <|MaskedSetence|>
**A**: The proposed AG-MAN can extract more discriminative image features. **B**: Specifically, we firstly enhance the pre-trained CNN backbone to increase the low-level features contained in image representations. **C**: In brief, this paper offers the following key contributions: .
ABC
CAB
ABC
ABC
Selection 4
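The attribute-guided attention module summarized in this row, an attribute embedding attending over spatial CNN features to produce an attribute-specific representation, can be sketched in PyTorch as follows; every dimension and design choice here is an assumption for illustration, not the AG-MAN implementation:

```python
import torch
import torch.nn as nn

class AttrAttention(nn.Module):
    """Attribute embedding attends over a spatial feature map (B, C, H, W)."""
    def __init__(self, n_attrs: int = 8, feat_dim: int = 256):
        super().__init__()
        self.attr_emb = nn.Embedding(n_attrs, feat_dim)

    def forward(self, fmap: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
        B, C, H, W = fmap.shape
        q = self.attr_emb(attr)                     # (B, C) attribute query
        k = fmap.flatten(2)                         # (B, C, H*W) spatial keys
        attn = torch.softmax((q.unsqueeze(1) @ k).squeeze(1) / C ** 0.5, dim=-1)
        return (k * attn.unsqueeze(1)).sum(-1)      # (B, C) attribute-specific feature

m = AttrAttention()
out = m(torch.randn(4, 256, 7, 7), torch.tensor([0, 1, 2, 3]))
print(out.shape)                                    # torch.Size([4, 256])
```

A shared backbone with one such readout per attribute yields the attribute-specific embeddings used for fine-grained similarity.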
<|MaskedSetence|> (2021a) use the features of infinitely-wide neural tangent kernel (NTK) from fully connected neural networks. Nguyen et al. <|MaskedSetence|> (2021a) by changing the infinite-width NTK from a fully connected network to a convolutional network. However, kernels formed out of convolutional and pooling layers introduce a significant computational burden, due to the necessity of keeping track of pixel-pixel correlations. To mitigate this problem, Nguyen et al. <|MaskedSetence|> So, we ask which features (kernels) to use to produce results better than or comparable to those of the distilled data trained with the features of infinitely-wide ConvNet NTKs, while keeping the computational cost manageable on a single V100 GPU?.
**A**: Nguyen et al. **B**: (2021b) significantly improves previous results in Nguyen et al. **C**: (2021b) use hundreds of V100 GPUs working in parallel to make the optimization computationally feasible. As a result, using the infinitely-wide ConvNet NTK (ConvNet-NTK) becomes out of reach for researchers who do not have hundreds of V100 GPUs available at hand.
ABC
ABC
ABC
BCA
Selection 3
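Infinite-width NTKs of the kind compared in this row can be evaluated with the public neural_tangents library. The sketch below uses a small fully connected kernel, precisely because (as the row notes) convolutional NTKs are far more expensive; all shapes are toy values:

```python
from jax import random
from neural_tangents import stax

# Infinite-width fully connected architecture; kernel_fn gives the
# closed-form NNGP/NTK kernels for it.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

k1, k2 = random.split(random.PRNGKey(0))
x1 = random.normal(k1, (10, 32))     # e.g., 10 distilled points, 32 features
x2 = random.normal(k2, (20, 32))     # 20 target points
ntk = kernel_fn(x1, x2, "ntk")       # (10, 20) NTK Gram matrix
print(ntk.shape)
```

Swapping the Dense layers for stax.Conv and pooling layers yields the convolutional NTK, at the cost of tracking pixel-pixel correlations, which is exactly the burden the row describes.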
<|MaskedSetence|> In general, we can see that our method is more robust to false positive detections, while displaying more accurate true positive detections. We also show the cross-view detections on the CC and MLO mammograms from the same breast in Figure 9. Notice how BRAIxDet is robust to false positives, and at the same time precise at detecting the true positives from both views. <|MaskedSetence|> <|MaskedSetence|>
**A**: Second row shows the competing approach [5] that updates the teacher’s BN statistics with the EMA from the student’s BN statistics.. **B**: We believe that such cross-view accuracy is enabled in part by the cross-view analysis provided by BRAIxDet. Table 7: Comparison of different types of batch normalisation (BN) strategies for the student-teacher SSL stage. The first row shows the mAP and Recall @ 0.5 results using the usual approach, where the student updates its own BN statistics, but the teacher does not update the BN statistics from pre-training. **C**: We visualise the detection results by the most competitive methods and our proposed BRAIxDet model on ADMANI (Figure 7) and on CBIS-DDSM (Figure 8).
CBA
CBA
CBA
CBA
Selection 2
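The BN comparison in this row's table contrasts two mean-teacher update rules: an EMA over parameters only, versus an EMA over parameters and BN running statistics. A generic PyTorch sketch of that switch (the standard mean-teacher pattern, not BRAIxDet's code):

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999, update_bn_stats=False):
    """Exponential moving average of student weights into the teacher.
    Assumes teacher and student share the same architecture."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)
    if update_bn_stats:
        # Variant from [5]: also track the student's BN running mean/var.
        for tb, sb in zip(teacher.buffers(), student.buffers()):
            if tb.dtype.is_floating_point:          # skip num_batches_tracked
                tb.mul_(decay).add_(sb, alpha=1.0 - decay)
```

With update_bn_stats=False the teacher keeps the BN statistics from pre-training, which corresponds to the first row of the table.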
<|MaskedSetence|> <|MaskedSetence|> We also implement our own synthetic scanning procedure to produce MVS-like point clouds. We use synthetic scanning on existing large shape datasets to create training datasets with true surfaces and point clouds with realistic characteristics (see Fig. 5). Furthermore, we specifically study the generalization capabilities of methods by using train and test sets with different characteristics in terms of shape categories and acquisition defects. MVS benchmarks. The generation of point clouds from 2D information such as overlapping images is an integral problem of surface reconstruction. There exists a variety of benchmarks using data captured in a laboratory environment [20, 21] or in the wild [22, 23, 24]. <|MaskedSetence|>
**A**: These benchmarks often use low-quality image acquisitions as input, and use a higher-quality acquisition, e.g., from LiDAR scans, to produce precise and dense point clouds that serve as reference.. **B**: and their test shapes as they provide a realistic challenge for both learning-based and traditional algorithms. **C**: In our benchmark, we use the synthetic range scanning procedure of Berger et al.
CBA
CBA
BAC
CBA
Selection 1
We now recapitulate some key properties of our model and contrast this with related methods: (i) Since the depth of our model is obtained by combining multiple kernel machines, we are able to use simple kernel functions such as the RBF-kernel. (ii) We directly train the dual variables, which are the node representations themselves. Conversely, Chen, Jacob, and Mairal (2020) uses graph kernels and approximates them using the Nyström-method to work in a primal representation. Nevertheless, our modular framework easily allows one to augment or replace the kernel function by a graph kernel, at any layer. (iii) The layerwise structure of our deep kernel machine yields an effective initialization scheme and levelwise objective function during finetuning, which can yield good conditioning of the training process. Other methods construct a deep feature map but train it in a shallow kernel machine setting (Du et al. 2019; Chen, Jacob, and Mairal 2020). (iv) Our method uses a 1-hop message passing scheme to learn the graph topology, whereas most state-of-the-art convolutional GNNs use higher order polynomial filters (He et al. 2021; He, Wei, and Wen 2022). (v) Our framework is highly modular. One can for example easily extend it to directed graphs or categorical data by choosing appropriate aggregation functions and kernel functions (e.g., (Couto 2005)) respectively. <|MaskedSetence|> On the other hand, the last three terms yield a spectral clustering of a learned similarity graph, where each cluster is pushed towards the few labeled nodes. <|MaskedSetence|> <|MaskedSetence|> We believe that these characteristics of our model explain the performances in Table 2, Table 3, and Table 5..
**A**: Furthermore, we include an unsupervised validation metric in our model selection task. **B**: Because of this regularization on the hidden representations, and the fact that the read-out layer is based on an unsupervised core model (augmented with a supervised term for the few labels), our model achieves good generalization capabilities. **C**: We refer the reader to Table 1 for a qualitative comparison with some different models. On one hand, the end-to-end objective (12) maximizes the variance in each hidden node representation.
CAB
CBA
CBA
CBA
Selection 2
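Two of the ingredients itemized above, a plain RBF kernel on node representations and 1-hop message passing, are simple to write down. A purely schematic NumPy sketch with assumed shapes (mean aggregation is one arbitrary choice of aggregation function):

```python
import numpy as np

def rbf_gram(H, gamma=1.0):
    """RBF Gram matrix over node representations H of shape (n, d)."""
    sq = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)                     # (n, n) kernel matrix

def one_hop_aggregate(A, H):
    """Mean of each node's 1-hop neighbours under adjacency A."""
    deg = A.sum(1, keepdims=True).clip(min=1.0)
    return A @ H / deg

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)       # toy adjacency
H = rng.normal(size=(5, 3))                        # toy node representations
K = rbf_gram(one_hop_aggregate(A, H))              # kernel on aggregated nodes
print(K.shape)                                     # (5, 5)
```

The RBF Gram matrix over aggregated representations is exactly the kind of simple kernel that property (i) says suffices once depth comes from stacking kernel machines.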
Weaker notions of universality Widdowson and Kurlin (2022) suggest a method for distinguishing almost every point cloud up to equivalence, similar to our result here on 1-EWL. Similarly, efficient separation/universality can also be obtained for point clouds with distinct principal axes (Puny et al. <|MaskedSetence|> <|MaskedSetence|> 2022; Villar et al. 2021; Satorras, Hoogeboom, and Welling 2021). <|MaskedSetence|>
**A**: 2021; Kurlin 2024). **B**: All these results do not provide universality for all point clouds, with respect to the joint action of permutations and rigid motions. . **C**: Another setting in which universality is easier to obtain is when only rigid symmetries are considered and permutation symmetries are ignored (Wang et al.
ACB
ACB
ACB
CAB
Selection 1
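The "distinct principal axes" route to separation mentioned in this row has a simple classical core: when a point cloud's covariance eigenvalues are distinct, aligning the cloud to its principal axes gives a canonical pose, after which rigid-motion equivalence reduces to direct comparison. A NumPy sketch (the sign-fixing rule via third moments is one assumed convention):

```python
import numpy as np

def canonicalize(P):
    """Map a point cloud (n, 3) to a canonical pose via its principal axes."""
    Pc = P - P.mean(0)                         # remove translation
    _, vecs = np.linalg.eigh(np.cov(Pc.T))     # principal axes (ascending eigenvalues)
    Q = Pc @ vecs                              # express points in the eigenbasis
    s = np.sign((Q ** 3).sum(0))               # fix axis signs via third moments
    s[s == 0] = 1.0
    return Q * s

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal transform
print(np.allclose(canonicalize(P), canonicalize(P @ R.T), atol=1e-6))  # True
```

When two eigenvalues coincide, the eigenbasis, and hence the canonical pose, is no longer unique, which is why such methods only separate almost every point cloud rather than all of them.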