| Column | Type | Details |
| --- | --- | --- |
| text_with_holes | string | lengths 114 to 4.02k |
| text_candidates | string | lengths 58 to 3.84k |
| A | string | 6 classes |
| B | string | 6 classes |
| C | string | 6 classes |
| D | string | 6 classes |
| label | string | 4 classes |
[DM20] showed that sparse parities are learnable on a two-layer network if the input coordinates outside the support of the parity are uniformly sampled and the coordinates inside the support are correlated. <|MaskedSetence|> Curriculum Learning (CL) in the context of machine learning has been extensively analysed from the empirical point of view [BLCW09, WCZ21, SIRS22]. <|MaskedSetence|> In [SMS22] the authors propose an analytical model for CL for functions depending on a sparse set of relevant features. In their model, easy samples have low variance on the irrelevant features, while hard samples have large variance on the irrelevant features. In contrast, our model does not require knowledge of the target task to select easy examples. <|MaskedSetence|>
**A**: In [WCA18, WA20] the authors analyse curriculum learning strategies in convex models and show an improvement in the speed of convergence of SGD. In contrast, our work covers an intrinsically non-convex problem. **B**: However, theoretical works on CL seem to be more scarce. **C**: To the best of our knowledge, none of these works propose a curriculum learning model to learn parities under the uniform distribution. Curriculum learning.
CBA
CBA
CBA
CBA
Selection 2
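The passage above describes inputs whose support coordinates are correlated while the rest stay uniform. A minimal numpy sketch of one such sampling scheme (the specific correlation mechanism, with probability `rho` each support bit copies a shared sign, is an illustrative assumption, not the paper's exact construction):

```python
import numpy as np

def sample_parity_batch(m, n, support, rho, rng):
    """m samples in {-1,+1}^n; label = parity of the `support` coordinates.
    rho = 0 gives i.i.d. uniform bits ("hard"); rho > 0 correlates the
    support coordinates ("easy") while off-support bits stay uniform."""
    X = rng.choice([-1, 1], size=(m, n))                 # uniform everywhere
    shared = rng.choice([-1, 1], size=(m, 1))            # one shared sign per sample
    use_shared = rng.random((m, len(support))) < rho
    indep = rng.choice([-1, 1], size=(m, len(support)))
    X[:, support] = np.where(use_shared, shared, indep)  # correlated support bits
    y = X[:, support].prod(axis=1)                       # parity labels
    return X, y

rng = np.random.default_rng(0)
X_easy, y_easy = sample_parity_batch(1000, 50, [3, 17, 29], rho=0.5, rng=rng)
X_hard, y_hard = sample_parity_batch(1000, 50, [3, 17, 29], rho=0.0, rng=rng)
```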
<|MaskedSetence|> <|MaskedSetence|> The first scenario is a common practice where researchers directly find the threshold on the labeled test set [26, 132, 133], which can lead to biased and unfair performance evaluations. In the second scenario, the threshold is selected based on a labeled validation set that yields the best F1 score. The selected threshold is then applied to the test set [53]. However, the first two cases may not align with real-world unsupervised applications where labeled data is unavailable. To tackle this limitation, the third scenario advocates for using an unlabeled validation set to select the threshold, which is then applied to the test set [80]. Note that there is no overlap between training, validation and test sets. The issue of the above threshold-dependent scenarios lies in their evaluation of the model’s performance at a single threshold, which may not capture the model’s behavior across the full range of possible thresholds. <|MaskedSetence|> This approach offers a more comprehensive assessment of the model’s performance by considering multiple thresholds, which allows for a deeper understanding of the trade-offs between Prec and Rec, facilitating better decision-making for a given application.
**A**: There are four scenarios for threshold finding. **B**: While PA%K can address the PA issue, selecting a decision threshold remains non-trivial in TSAD research. **C**: Hence, several methods in the domains of time-series signals, as well as social networks and videos [27, 113, 98, 99, 100, 101, 54] have adopted the fourth scenario, leveraging the threshold-independent metrics such as Area Under the Receiver-Operating Characteristic Curve (AUC) [134, 135], and Area Under the Precision-Recall Curve (APR) [136].
BAC
BCA
BAC
BAC
Selection 3
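The fourth, threshold-independent scenario described above is easy to reproduce; a minimal sketch with scikit-learn, where `labels` and `scores` are placeholder detector outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

labels = np.array([0, 0, 1, 0, 1, 1, 0, 0])              # 1 = anomaly
scores = np.array([0.1, 0.4, 0.8, 0.2, 0.9, 0.7, 0.3, 0.05])

auc = roc_auc_score(labels, scores)            # performance across all thresholds
apr = average_precision_score(labels, scores)  # summarizes the Prec/Rec trade-off
print(f"AUC={auc:.3f}  APR={apr:.3f}")
```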
RQ1: Evolution of accusations Table 2 shows the nearest neighbor embeddings of the term bot over the years covered in our dataset. <|MaskedSetence|> <|MaskedSetence|> In the years leading up to 2016, the term bot is closely and firmly associated with terms representing the automated behavior often used in academia to conceptualize bots. These include software, script, or comments. <|MaskedSetence|> By studying inherently interpersonal accusation situations, we find empirical support for Haslam’s theoretical argument that “dehumanization is an important phenomenon in interpersonal as well as intergroup contexts” and also “occurs outside the domains of violence and conflict” (Haslam 2006).
**A**: As the number of accusations for the early years up to 2016 is significantly lower (see Figure 2), we aggregate the accusations made in those years to get a sufficient number of observations for training the embeddings. **B**: The color coding we applied to the nearest embeddings vectors displayed in Table 2 shows how the meaning of the bot accusations shifted over the years. **C**: In the following years, the wording of the bot accusations shifted away from such signs of automation in favor of terms used to insult the accused user. The use of derogatory terms such as moron, stupid, idiot, and shill implies that the accuser views the accused as less than human, often by questioning their mental capacity. This indicates that over time bot accusations became predominantly instances of “dehumanization” (Haslam and Loughnan 2014).
ABC
ABC
ABC
CAB
Selection 2
<|MaskedSetence|> First, we investigate how the neural network structure of the QNBM affects its classical simulatability, comparing the non-linear model to its “linearized” version. <|MaskedSetence|> <|MaskedSetence|> Lastly, we provide the first demonstration of the model’s ability to generalize, i.e., learn an underlying ground truth probability distribution from a limited number of training samples. 3.1 Does the non-linearity make the learning model efficiently classically simulatable as a result of the deferred measurement principle?
**A**: Next, we thoroughly assess the model’s learning capabilities by training a $(5,0,6)$ network on three types of target probability distributions. **B**: To further distinguish the quantum model’s capabilities from those of classical generative architectures, we benchmark the learning performance against a classical RBM, which contains a similar network structure to the QNBM with a similar number of resources. **C**: In this section, we introduce a deeper investigation into the QNBM as a quantum generative model from multiple perspectives.
CAB
CAB
CAB
BCA
Selection 3
A big challenge is that ROMs provide a simplified and imperfect description of the dynamics, which negatively affects the performance of the state estimator. One potential solution is to improve the accuracy of the ROM through the inclusion of additional closure terms (Ahmed et al., 2021). <|MaskedSetence|> The RL-ROE is constructed from the ROM in an analogous way to a Kalman filter, with the crucial difference that the linear filter gain function, which takes in the current measurement data, is replaced by a nonlinear policy trained through reinforcement learning (RL). <|MaskedSetence|> Indeed, we show that in the limit of sparse measurements, the trained RL-ROE outperforms a Kalman filter designed using the same ROM and displays robust estimation performance across different dynamical regimes. <|MaskedSetence|>
**A**: In this paper, we leave the ROM untouched and instead propose a new design paradigm for the estimator itself, which we call a reinforcement-learning reduced-order estimator (RL-ROE). **B**: To our knowledge, the RL-ROE is the first application of RL to state estimation of parametric PDEs. **C**: The flexibility of the nonlinear policy, parameterized by a neural network, enables the RL-ROE to compensate for errors of the ROM while still taking advantage of the imperfect knowledge of the dynamics.
ACB
CAB
ACB
ACB
Selection 3
The bottom of Fig. <|MaskedSetence|> <|MaskedSetence|> However, it uses two separate decoders. The feature map computed by the encoder is split along the feature dimension into two separate maps: one for shape features (blue) and one for content features (orange). Each feature map is processed by a different decoder. While the decoders share the same architecture, each has its own unique weights. The shape features are decoded into the inner nodes of the BSP trees, which represent the shape of the regions in the final segmentation, as shown in Fig. <|MaskedSetence|> The content features are decoded into the leaf nodes of the BSP trees, which represent each region’s class logits. Figure 3: A partitioning of a square region (right) defined by a BSP tree (center). Shape features are decoded into the parameters of the inner nodes (blue), which define lines (green) creating the partitioning. Content features are decoded into the parameters of the leaf nodes (orange), which are the class logits predicted for each partition.
**A**: Like other state-of-the-art models, it uses an encoder-decoder architecture. **B**: 2 shows BSPSegNet in comparison. **C**: 3.
BAC
ACB
BAC
BAC
Selection 4
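The split-decoder design described above is simple to express in code. A schematic PyTorch sketch (not the authors' implementation; channel counts and head sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TwoDecoderSegNet(nn.Module):
    def __init__(self, feat_ch=64, inner_nodes=7, n_classes=10, n_leaves=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU())

        def make_decoder(out_dim):  # same architecture, separate weights
            return nn.Sequential(nn.Conv2d(feat_ch // 2, 32, 1), nn.ReLU(),
                                 nn.Conv2d(32, out_dim, 1))

        self.shape_decoder = make_decoder(3 * inner_nodes)         # line params per inner node
        self.content_decoder = make_decoder(n_classes * n_leaves)  # class logits per leaf

    def forward(self, x):
        f = self.encoder(x)
        f_shape, f_content = f.chunk(2, dim=1)  # split along the feature dimension
        return self.shape_decoder(f_shape), self.content_decoder(f_content)

shape_params, leaf_logits = TwoDecoderSegNet()(torch.randn(1, 3, 64, 64))
```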
We compare the model performance between our CECT and several SOTA models in table 4 and table 5, for the COVID-19 radiography dataset and the COVIDx CXR-3 dataset, respectively. For intuitive illustration, we categorize the SOTA approaches into pure and hybrid methods, in which pure methods are CNN- or transformer-based, and hybrid methods are based on their integration. <|MaskedSetence|> From table 4, we can find that CECT shows apparent performance leadership on most metrics. It reaches the highest ACC, NPV, SEN, and FOS of 98.1%, 98.6%, 96.1%, and 96.4%, respectively. <|MaskedSetence|> From the result shown in table 5, CECT demonstrates tremendous leadership on most evaluation metrics, reaching an ACC, NPV, PPV, SEN, and FOS of 97.2%, 96.1%, 98.5%, 96.0%, and 97.2%, respectively. The highest SPE is achieved by LeViT with 0.5% leadership compared with CECT. <|MaskedSetence|>
**A**: Regarding the PPV and SPE, the performance is also among the best though not as high as GhostNet. **B**: It can be found that CECT outperforms SOTA methods on all evaluation metrics to a different extent. **C**: The overall performance leadership of CECT shows the effectiveness of the proposed architecture and the outstanding performance leadership demonstrates the importance of capturing both multi-local and global features.
BAC
BAC
BAC
BCA
Selection 2
<|MaskedSetence|> In cross-device FL, an attacker who controls multiple participating devices could in principle obtain some or all model checkpoints. But we argue there are at least two important cases where the “final-model-only” threat model is realistic. First, one can simulate DP-FedAvg on centrally collected data in the datacenter to provide a user-level DP guarantee. In this scenario, clients still entrust the server with their data, but intermediate states are ephemeral and only the final privatized model (whether via release of model parameters or black-box access) is made public. <|MaskedSetence|> <|MaskedSetence|> Therefore we believe the final-model-only threat model is realistic and important, and will be of increasing interest in coming years as TEEs become more widely used.
**A**: Second, there is much recent interest in using Trusted Execution Environments (TEEs) for further data minimization. **B**: For example, using TEEs on server and client, a client could prove to the server that they are performing local training as intended without being able to access the model parameters [Mo et al., 2021]. **C**: Existing analytical bounds on $\varepsilon$ for DP-SGD assume that all intermediate model updates can be observed by the adversary [Abadi et al., 2016].
CAB
CAB
ABC
CAB
Selection 1
This section introduces relevant background material. <|MaskedSetence|> In recognition of the philosophical nature of this topic we present arguments rather than mathematical proofs, and the paper should be understandable without delving too deeply into the math. While all relevant definitions are given here, context is provided by the papers in which these definitions originated, and in technical appendices available on GitHub [1]. To those more familiar with the agent environment paradigm, how exactly these definitions formalise cognition may seem unclear. <|MaskedSetence|> This is because it is a formalism of enactivism [13], which holds that cognition extends into and is enacted within the environment. What then constitutes the agent is unclear. In light of this, and in the absence of any need to define an agent absent an environment, why preserve the distinction? Subsequently, the agent and environment are merged to form a task [7], which may be understood as context specific manifestations of intent, or snapshots of what bears some resemblance to “Being-in-the-world” as described by Heidegger [14]. In simpler terms, this reduces cognition to a finite set of decision problems [7]. <|MaskedSetence|> Arguments as to why only finite sets are relevant are given elsewhere [15, p. 2].
**A**: The reader may wish to skip ahead to section 3333 and refer here as needed. **B**: One infers a model from past interactions, and then makes a decision based upon that model (akin to a supervised learner fitting a function to labelled data, then using that to generate labels for unlabelled data). **C**: Neither agent nor environment are defined.
ACB
ACB
ACB
ACB
Selection 3
Comparing $K$ (probability) measures requires the pairwise calculation of transport-based distances, which, despite the significant recent computational speed-ups, remains relatively expensive. To address this problem, [41] proposed the Linear Optimal Transport (LOT) framework, which linearizes the 2-Wasserstein distance utilizing its weak Riemannian structure. In short, the probability measures are embedded into the tangent space at a fixed reference measure (e.g., the measures’ Wasserstein barycenter) through a logarithmic map. The Euclidean distances between the embedded measures then approximate the 2-Wasserstein distance between the probability measures. The LOT framework is computationally attractive as it only requires the computation of one optimal transport problem per input measure, reducing the otherwise quadratic cost to linear. Moreover, the framework provides theoretical guarantees on convexifying certain sets of probability measures [34, 1], which is critical in supervised and unsupervised learning from sets of probability measures. The LOT embedding has recently found diverse applications, from comparing collider events in physics [9] and comparing medical images [5, 30] to permutation invariant pooling for comparing graphs [29] and point sets [35]. Many applications in ML involve comparing non-negative measures (often empirical measures) with varying total amounts of mass, e.g., domain adaptation [19]. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: Moreover, OT distances (or dissimilarity measures) are often not robust against outliers and noise, resulting in potentially high transportation costs for outliers. **B**: For instance, the optimal partial transport (OPT) problem [8, 20, 21],. **C**: Many recent publications have focused on variants of the OT problem that allow for comparing non-negative measures with unequal mass.
ACB
ACB
ABC
ACB
Selection 1
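The LOT embedding described above can be sketched with the POT library: one OT problem per input measure, a barycentric projection as the log map at the reference, and Euclidean distances in the embedding. The uniform weights and the reference choice below are simplifying assumptions:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def lot_embed(ref, measures):
    """ref: (n, d) reference support, uniform weights; measures: list of (m_i, d)."""
    n = ref.shape[0]
    a = np.full(n, 1.0 / n)
    embeddings = []
    for Y in measures:
        b = np.full(Y.shape[0], 1.0 / Y.shape[0])
        M = ot.dist(ref, Y)                   # squared Euclidean cost matrix
        plan = ot.emd(a, b, M)                # one OT problem per input measure
        T = (plan @ Y) / a[:, None]           # barycentric projection (Monge map estimate)
        embeddings.append((T - ref).ravel())  # log map at the reference
    return np.stack(embeddings)

rng = np.random.default_rng(0)
ref = rng.normal(size=(50, 2))
E = lot_embed(ref, [rng.normal(loc=i, size=(60, 2)) for i in range(3)])
# pairwise LOT distance approximating W2: ||E[i] - E[j]|| / sqrt(len(ref))
```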
To address this inefficiency in learning to cooperate, it is essential for agents to focus their exploration on parts of the environment which require cooperation. To guide exploration towards states and actions that require cooperation, we follow the simple intuition that such states and actions result in varying rewards depending on the actions of other agents. In our example in Figure 1 (right), both agents receive varying rewards after selecting the pick-up action depending on whether the other agent also selects the pick-up action. Only if both agents select the pick-up action do they succeed and receive a positive reward, whereas if any agent does not select the pick-up action they do not receive such a reward. <|MaskedSetence|> Therefore, agents receive identical rewards when moving but receive varying rewards when attempting to pick-up, indicating the potential for cooperation. Following this intuition, we propose ensemble value functions for multi-agent exploration (EMAX), a general framework to seamlessly extend any value-based MARL algorithms by training ensembles of value functions for each agent. To incentivise agents to focus their exploration on state-action pairs that require cooperation across multiple agents, EMAX captures the variability of received rewards via the disagreement of value estimates across the ensemble of value functions. <|MaskedSetence|> <|MaskedSetence|>
**A**: The EMAX exploration policy can be thought of as an inductive bias for MARL towards exploring state-action pairs that require cooperation. **B**: Using an upper-confidence bound (UCB) policy (Auer, 2002), EMAX follows actions with high value estimates and disagreement, corresponding to actions that are considered promising (as measured by the value estimates) and are likely to require cooperation (as measured by the disagreement in value estimates). **C**: In contrast, agents receive no rewards when selecting actions to move across the environment independent of the actions of other agents because there is no potential for rewards if any one agent does not attempt to pick up the object.
CBA
BCA
CBA
CBA
Selection 4
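The UCB-style rule described above reduces to a few lines; a minimal numpy sketch where the exploration coefficient `c` is an assumed hyperparameter:

```python
import numpy as np

def emax_action(q_ensemble, c=1.0):
    """q_ensemble: (n_models, n_actions) value estimates from one agent's ensemble."""
    mean = q_ensemble.mean(axis=0)            # promising actions
    disagreement = q_ensemble.std(axis=0)     # high where rewards vary with others' actions
    return int(np.argmax(mean + c * disagreement))

rng = np.random.default_rng(0)
print(emax_action(rng.normal(size=(5, 4))))   # pick action for a 5-model, 4-action ensemble
```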
2.5 Message Propagation Most GTs follow the global all-to-all attention of Vaswani et al. <|MaskedSetence|> <|MaskedSetence|> Other models alter the global attention mechanism to bias it explicitly, typically based on the input graph’s structural properties. Graphormer (Ying et al., 2021) incorporates shortest-path distances, representation of edges along a shortest path, and node degrees. Transformer-M (Luo et al., 2022a) follows Graphormer and adds kernelized 3D inter-atomic distances. GRPE (Park et al., 2022) considers multiplicative interactions of keys and queries with node and edge features instead of Graphormer’s additive bias and additionally augments output token values. SAN (Kreuzer et al., 2021) relies on positional encodings and only adds preferential bias to interactions along input-graph edges over long-distance virtual edges. GraphiT (Mialon et al., 2021) employs diffusion kernel bias, while SAT (Chen et al., 2022b) develops a GNN-based structure-aware attention kernel. EGT (Hussain et al., 2022) includes a mechanism akin to cross-attention to edge tokens to bias inter-node attention and update edge representations. <|MaskedSetence|> (2023) propose Graphormer-GD, a generalization of Graphormer, alongside new expressivity metrics based on graph biconnectivity for which Graphormer-GD attains the necessary expressivity.
**A**: (2017) between all pairs of tokens. **B**: Finally, Zhang et al. **C**: In the initial GT (Dwivedi & Bresson, 2020) and TokenGT (Kim et al., 2022) this mechanism is unchanged, relying on token representations augmented with graph structural or positional information.
ABC
ACB
ACB
ACB
Selection 2
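To make the "explicit structural bias" idea concrete, here is a schematic PyTorch reduction of Graphormer-style spatial encoding: a learnable scalar per (clamped) shortest-path distance added to the attention logits. This is an illustrative single-head simplification, not the published implementation:

```python
import torch
import torch.nn as nn
from scipy.sparse.csgraph import shortest_path

class SPDBiasedAttention(nn.Module):
    def __init__(self, dim, max_dist=8):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.spd_bias = nn.Embedding(max_dist + 1, 1)  # one bias per distance bucket
        self.max_dist = max_dist

    def forward(self, x, adj):
        # x: (n, dim) node tokens; adj: (n, n) dense adjacency (numpy array)
        spd = shortest_path(adj, unweighted=True)              # pairwise hop distances
        spd = torch.as_tensor(spd).clamp(max=self.max_dist).long()  # inf -> max_dist
        logits = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        logits = logits + self.spd_bias(spd).squeeze(-1)       # structural bias
        return torch.softmax(logits, dim=-1) @ self.v(x)
```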
<|MaskedSetence|> (2021) provides no data for NAS-Bench-NLP, preventing us from using their results for calculations. Therefore, in Table 4, we give only values provided in the paper together with our epsilon metric (data for fisher is absent). We want to note that, unlike accuracy, perplexity used for language-related ML problems should be minimised. <|MaskedSetence|> (2021). Table 4: Zero-cost metrics performance evaluated on NAS-Bench-NLP search space, PTB dataset. <|MaskedSetence|>
**A**: Therefore, the signs of correlations with scoring metrics should be reversed, which is not the case for numbers given in Abdelfattah et al. **B**: Unfortunately, Abdelfattah et al. **C**: We highlight the best-performing metrics in bold.
BAC
ACB
BAC
BAC
Selection 3
<|MaskedSetence|> This connection differs from the ones used in the recent results of [BKSW23, Beh23]. For example, the dynamic $(2-\Omega(1))$-approximate algorithms in [BKSW23, Beh23] are inspired by the two-pass streaming algorithms (e.g. [KMM12]). <|MaskedSetence|> It is obtained using sublinear algorithms to improve the $(3/2)$-approximation guarantee of the tight instances of EDCS. <|MaskedSetence|> Below, we explain the overview of our sublinear algorithm, which consists of two key ingredients, and then explain how our dynamic algorithm easily follows.
**A**: Then, they use sublinear algorithms [Beh22] to implement this streaming algorithm in the dynamic setting efficiently. (Footnote 6: The dynamic $(3/2-\Omega(1))$-approximate algorithm in [Beh23] does not have an explicit relationship to streaming algorithms.) **B**: In contrast, it is our sublinear algorithm, not dynamic algorithm, that is inspired by the $O(1)$-pass streaming algorithm [McG05]. **C**: 2 Technical Overview Our high-level approach is based on the interconnection between dynamic, sublinear, and streaming algorithms.
CAB
CAB
BCA
CAB
Selection 4
<|MaskedSetence|> Each snapshot of visual data includes an RGB or color image, left and right infrared images (obtained by imagers), a depth map, a point cloud, and inertial measurement unit (IMU) data. The IMU data is collected by two sensors: an accelerometer and a gyroscope. The features are captured by using Robot Operating System (ROS) and rosbag (.bag) file format. A summary of all the features of the camera is given in Table II. The frequency of different features is chosen on the basis of the maximum bandwidth of the USB-2 bus on the laptop. <|MaskedSetence|> <|MaskedSetence|>
**A**: For future dataset creation, we instead suggest choosing a common fps for the RGB, infrared cameras and depth map, as well as a common frequency for the accelerometer and gyroscope. **B**: III-A Vision system An Intel® RealSense™ D435i camera connected to a laptop on the robot is used as the input for the vision system. **C**: This would simplify usage and a reduced fps would potentially allow higher resolution.
BAC
CBA
BAC
BAC
Selection 4
<|MaskedSetence|> Heatmaps, as shown in Fig. <|MaskedSetence|> The attention score of each patch was normalized to the range [0, 1], with high scores indicating regions crucial for diagnosis. <|MaskedSetence|> These heat maps were overlaid onto the original WSIs.
**A**: RGB color maps were used to enhance the clarity, with red representing high attention and blue indicating low attention. **B**: 4, were generated using a patch size of $224\times 224$ and 90% overlap. **C**: 4.5 Interpretability and whole slide attention visualization To assess the effectiveness of the proposed model in capturing the morphological features of prostate cancer, a random subset of the WSIs was selected from the test set.
CBA
BAC
CBA
CBA
Selection 1
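The visualization pipeline described above (normalize scores to [0, 1], map to an RGB colormap with red high and blue low, overlay on the slide) can be sketched as follows; array shapes and the blending factor are assumptions:

```python
import numpy as np
import matplotlib.cm as cm
from PIL import Image

def attention_overlay(slide_rgb, scores, alpha=0.5):
    """slide_rgb: (H, W, 3) uint8 slide image; scores: (h, w) raw patch attention."""
    s = (scores - scores.min()) / (np.ptp(scores) + 1e-8)   # normalize to [0, 1]
    heat = cm.jet(s)[..., :3]                               # blue (low) -> red (high)
    heat = np.array(Image.fromarray((heat * 255).astype(np.uint8))
                    .resize(slide_rgb.shape[1::-1], Image.BILINEAR))
    return ((1 - alpha) * slide_rgb + alpha * heat).astype(np.uint8)
```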
where $U$ is some matrix that depends on $A-A_{r}$ and $\Omega$. Although it is possible to derive a relative upper bound from Theorem 1.2 of Keshavan et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The optimization problem is formulated as least squares minimization subject to nuclear norm or max norm constraints. Their theoretical results follow from generic generalization guarantees based on the Rademacher complexity. Specifically, Theorem 6 of Foygel and Srebro (2011) implies the following additive upper bound.
**A**: (2010b), it requires very strong assumptions about the coherence, the aspect ratio ($n/m$), the condition number of $A_{r}$ and the $r$-th singular value of $A$. **B**: Thus, the bound derived from Theorem 1.2 of Keshavan et al. **C**: (2010b) is significantly more restricted than the bound proved here. Foygel and Srebro (2011) study the problem of matrix completion from the viewpoint of supervised learning.
ABC
CBA
ABC
ABC
Selection 4
<|MaskedSetence|> We show that any eventually positive system is also positive with respect to some cone, which, however, may be hard to characterize explicitly or to use. <|MaskedSetence|> We then consider positive input-output systems, which are characterized by eventually positive trajectories of the states and positive outputs. <|MaskedSetence|> In fact, it was shown that a subset of scaled diagonally dominant matrices admit diagonal Lyapunov functions [13], which can be computed by linear programming/algebra [14]. These results were then extended to block-partitioned matrices in [15]. Eventually positive systems can be considered as a complementary extension of positive systems with a different set of retained properties.
**A**: We also derive straightforward formulas to compute invariant cones and Lyapunov functions for eventually positive systems under some additional assumptions on the system matrix. **B**: We first consider dynamical eventually positive systems, which are studied in [12] from the linear algebra point of view and we offer a control-theoretic one. **C**: We extend some of the properties of positive systems to this case such as: computation of induced norms, properties of energy functions and discuss implications for model reduction. It is worth mentioning that some of the strong properties of positive systems exploit scaled diagonal dominance of the system matrices rather than nonnegativity of the trajectories.
ACB
BAC
BAC
BAC
Selection 3
2 Related work The strength of McDiarmid’s inequality lies in its applicability (see [10] for an extensive survey): the set $\mathcal{X}$ may be completely arbitrary, and, even when $f$ is involved, it is usually easy to check that the bounded differences assumption holds. Two notable applications are combinatorics and learning theory. <|MaskedSetence|> <|MaskedSetence|> An overview of how McDiarmid’s inequality can be applied to several problems in information theory is found in [13]. <|MaskedSetence|>
**A**: Two representative results are the concentration of the chromatic number of Erdős-Rényi graphs [1], and the fact that stable algorithms have good generalization performance [2]. **B**: Namely, if the output of a learning algorithm does not change too much when a single training example is modified, then it performs well on an unseen example. **C**: For instance, [8] applies McDiarmid’s inequality to solve the problem of channel resolvability.
ABC
ABC
ABC
BCA
Selection 1
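For reference, the inequality the passage discusses has the following standard statement (a well-known form, not quoted from the surveyed papers):

```latex
% McDiarmid's (bounded differences) inequality. Suppose X_1, ..., X_n are
% independent and f: \mathcal{X}^n \to \mathbb{R} satisfies, for every i
% and every x_1, ..., x_n, x_i',
%   |f(..., x_i, ...) - f(..., x_i', ...)| \le c_i.
% Then for all t > 0:
\mathbb{P}\bigl( f(X_1,\dots,X_n) - \mathbb{E}\,f(X_1,\dots,X_n) \ge t \bigr)
  \;\le\; \exp\!\left( \frac{-2t^{2}}{\sum_{i=1}^{n} c_i^{2}} \right)
```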
Such additive pseudo-solutions are naturally connected to the method of alteration in the probabilistic method, where we delete/alter some items in a random structure to establish a desired property [3]. Additive pseudo-approximations have appeared implicitly in prior algorithms, e.g., [31, 28]. <|MaskedSetence|> Given a $q$-additive pseudo-solution, we can often “fix” the $q$ additional items in some problem-specific way. <|MaskedSetence|> This strategy was used in the $k$-median approximation algorithm of [31]. We use it here for our (true) approximation algorithm for knapsack center. <|MaskedSetence|>
**A**: These problems are described in Section 1.3. **B**: If $q$ is small, this may incur only a small overhead in the cost or computational complexity. **C**: We can summarize some of their advantages here, from both practical and technical points of view. First, additive pseudo-approximation can be a useful tool to obtain true approximations.
ACB
CBA
CBA
CBA
Selection 3
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In Section 5 we prove that, by means of effective translations, the model update operator can be eliminated and then the path quantifier and temporal operator in state formulas can be replaced by the classical box modality. Using these results, in Section 6 we establish a sound and complete axiomatization of $\mathsf{LTC}$ and show its decidability. We conclude with brief remarks in Section 7.
**A**: Section 4 proposes and discusses our formal solution to the Sea Battle Puzzle, based on its formalisation in $\mathsf{LTC}$. **B**: In Section 3 we show that the update operator in $\mathsf{LTC}$ properly captures temporal conditionals. **C**: Structure of the paper In Section 2 we introduce the logic of temporal conditionals $\mathsf{LTC}$ and provide and discuss its formal semantics.
CBA
BCA
CBA
CBA
Selection 4
they called polyadic codes, by defining the $m$-adic residue codes (see A4). <|MaskedSetence|> In A1, by virtue of generator polynomials over finite fields, Job defined $m$-adic residue codes. <|MaskedSetence|> Chen et al. <|MaskedSetence|>
**A**: studied the. **B**: In Ling, Ling and Xing extended the definition of polyadic cyclic codes to include noncyclic abelian codes, and obtained necessary and sufficient conditions for the existence of non-degenerate polyadic codes. **C**: Pless studied the idempotent generators of the polyadic codes in A2.
ACB
CBA
CBA
CBA
Selection 4
<|MaskedSetence|> The exception here is any path which contains a proxy, a variable which is generated by a protected variable and not considered a resolving variable. However, the path then is only considered to exhibit potential proxy discrimination, which then has to be checked by doing an intervention [30]. Taking the same example as for unresolved discrimination, candy crush is not a resolving variable as its sole purpose in the model is the approximation of sex. <|MaskedSetence|> <|MaskedSetence|>
**A**: This approach, compared to detecting unresolved discrimination, is managed more easily as we only consider a path an issue if it is blocked by a proxy. **B**: Every proxy is labelled manually, therefore the number of problematic paths is reduced. **C**: For the second issue, again consider all paths from the protected variable to the classifier, but assume all are not discriminating.
CAB
ABC
CAB
CAB
Selection 4
<|MaskedSetence|> One is based on regularity results for the corresponding Kolmogorov equation and Malliavin calculus; see, e.g., [BD18, Deb11]. <|MaskedSetence|> Indeed, the two approaches mentioned above are not applicable as they rely strongly on the smoothing effect of the semigroup. <|MaskedSetence|> A special case of our main result is presented in the following theorem.
**A**: The other is based on more elementary regularity results of the Kolmogorov equation and the mild Itô formula; see, e.g., [CJK19, HJK16, JK21]. No successful approach for proving optimal weak convergence rates has been developed yet for temporal discretisations of hyperbolic SPDEs with multiplicative noise. **B**: In this work we tackle this problem and develop a technique that allows one to establish optimal weak convergence rates for hyperbolic SPDEs with multiplicative noise. **C**: Roughly speaking, there are two successful approaches to obtain optimal weak convergence rates for parabolic SPDEs with multiplicative noise.
CAB
CAB
CAB
BCA
Selection 2
Our proposed algorithmic instantiation of HiLAD for batch data setting (HiLAD-Batch) and recent work on feedback-guided AD via online optimization (Feedback-guided Online) (?) both build upon the same tree-based model proposed by (?). <|MaskedSetence|> <|MaskedSetence|> Feedback-guided Online also adopts the greedy query selection strategy mainly because it works well in practice. In this work, we present the fundamental reason why greedy query selection strategy is label-efficient for human-in-the-loop learning with tree-based anomaly detection ensembles. Anomaly detection for streaming data setting has many real-life applications. Under this setup, data is sent continuously as a stream to the anomaly detector. (?) present a broad overview of this setup and categorize the outlier detection techniques into three sub-categories: (a) evolving prediction models, (b) distance based outlier detection for sliding windows, (c) methods for high-dimensional data streams. Our proposed approach falls under the class of distance based outlier detection in a sliding window. Existing approaches can detect either local anomalies (?, ?, ?) or global anomalies (?, ?, ?). (?) proposed a tree-based streaming anomaly detector. The key idea is to create a sketch (or summary) of the data using a random cut forest and to report a data point as anomaly when the displacement score is high. <|MaskedSetence|>
**A**: Therefore, there is no fundamental difference in their performance (Section 7.3). **B**: The uniform initialization of weights employed by both HiLAD-Batch and Feedback-guided Online plays a critical role in their overall effectiveness. **C**: However, unlike our work, none of these prior streaming anomaly detection methods incorporate human feedback.
ABC
ACB
ABC
ABC
Selection 3
<|MaskedSetence|> The dataset was constructed from small, low-resolution, full-color photographs, in ten categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. Each image is represented as a 32×32 array of pixels in three color channels: RED, GREEN, BLUE. There are 50,000 images in the training set and 10,000 images in the test set. We performed global contrast normalization on the images in the training set, and then scaled the normalized image intensity down to a real number in the range [0, 1], but we did no other preprocessing, such as whitening (Krizhevsky, 2009). <|MaskedSetence|> <|MaskedSetence|> Our plan is to proceed along the same path as in Figure 11, but only as far as the lower-right corner, as we did with MNIST, to produce a 12-dimensional encoding of the patches. The main difference is that we are now working with three color channels instead of one monochrome channel. We will also introduce two new constructs in this Section that were not part of our analysis of the MNIST dataset in Section 6: quotient manifolds and product manifolds.
**A**: To facilitate a straightforward comparison to our results on the MNIST dataset, we defined a two-pixel border around each image in the training set and sampled the set of 7×7 patches within the 28×28 interior, drawing 12 samples per image. **B**: We thus started out with a sample that is comparable to the sample in Figure 11: 600,000 7×7 patches. **C**: The second example that we will study is the CIFAR-10 dataset (Krizhevsky, 2009), a classification task which is generally considered to be much harder than MNIST.
CAB
CAB
CAB
CBA
Selection 2
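The preprocessing and patch-sampling procedure described above is mechanical; a hedged numpy sketch, where the particular GCN variant (subtract the mean, divide by the standard deviation) is an assumption:

```python
import numpy as np

def gcn(img, eps=1e-8):
    """Global contrast normalization of one image."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + eps)

def sample_patches(images, per_image=12, patch=7, rng=None):
    """images: iterable of (32, 32, 3) arrays; returns stacked 7x7 patches."""
    rng = rng or np.random.default_rng(0)
    out = []
    for img in images:
        g = gcn(img)
        g = (g - g.min()) / (g.max() - g.min())   # rescale intensity to [0, 1]
        for _ in range(per_image):
            # two-pixel border => patch top-left corner ranges over the 28x28 interior
            r, c = rng.integers(2, 32 - 2 - patch + 1, size=2)
            out.append(g[r:r + patch, c:c + patch])
    return np.stack(out)
```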
<|MaskedSetence|> (2018). <|MaskedSetence|> <|MaskedSetence|> (2014) of 0.3 is enforced during training. The tagger was trained for 10 epochs.
**A**: The Adam (Kingma and Ba, 2015) optimizer is used for training and a dropout rate Srivastava et al. **B**: 4.2 Training Setup and Hyperparameters For the morphological tagger, we use the baseline implementation from Malaviya et al. **C**: This implementation uses an input layer and linear layer dimension of 128 and a 2-layer LSTM with a hidden layer dimension of 256.
BCA
BCA
BCA
CBA
Selection 2
For the learning-based coding strategy, 5,000 (500 images per class) images are randomly sampled from the database set to construct the training set. <|MaskedSetence|> We utilize a pre-trained AlexNet as the backbone network. <|MaskedSetence|> We set weight-decay as $5\times 10^{-5}$ and mini-batch size as 128. $\beta$ will be enlarged by a factor of 1.005 per epoch to reduce quantization error. <|MaskedSetence|>
**A**: For these methods, all experiments are run five times with different random seeds and average accuracy is reported. **B**: We set initial learning rate as 0.05 and reduce it to 0.025 after 200 epochs (we train our algorithms for 400 epochs in total). **C**: And we resize all images to $224\times 224$ and use the raw pixels as the inputs.
CBA
ACB
CBA
CBA
Selection 3
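The training schedule stated in this row maps directly onto standard PyTorch components; a schematic sketch in which the model, loss, and data loader are placeholders:

```python
import torch

model = torch.nn.Linear(9216, 48)  # stand-in for the AlexNet-based coding head
opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=5e-5)
# lr 0.05 -> 0.025 after 200 of 400 epochs
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[200], gamma=0.5)

beta = 1.0
for epoch in range(400):
    # for x, y in loader:                       # mini-batches of size 128
    #     loss = task_loss(model(x), y) + beta * quantization_error(model(x))
    #     opt.zero_grad(); loss.backward(); opt.step()
    beta *= 1.005   # enlarge beta each epoch to shrink the quantization error
    sched.step()
```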
The goal of connectomics is to reconstruct the neural wiring diagram from Electron Microscopic (EM) images of the animal brain to improve the understanding of neuropathology and intelligence. <|MaskedSetence|> Manual labeling of synapses can be extremely hard because (1) there are approximately one billion synapses in a $1\,mm^{3}$ cube of a mouse brain, and (2) the labeling of synapses requires expertise and cannot be crowdsourced. Therefore, a good labeling system of synapses should be semi-automatic and only provide informative samples to the domain experts to improve the labeling efficiency. <|MaskedSetence|> In total, there are 4,000 image patches, half of them containing a synapse at the center of the image, while the other half do not contain synapses. In this study, we show how our system helps experts classify synapse images and non-synapse images without any labeled training set and pre-specified domain knowledge. CNN-based approaches have achieved state-of-the-art performance on image classification tasks [38, 26]. <|MaskedSetence|>
**A**: However, there are still two main shortcomings of CNN-based methods. **B**: A synapse is a functional structure that enables signal transfer from one neuron to the other, which connects individual neurons into a complex network. **C**: To showcase the effectiveness of our proposed approach, we applied the annotation system to a high-resolution EM image dataset generated by a multi-beam scanning electron microscope (Appendix A provides a visual overview of the Synapse Detection dataset).
BCA
BCA
BCA
ACB
Selection 1
<|MaskedSetence|> To the best of our knowledge, this article develops these properties for the two-parameter generalized entropy for the first time in the literature. This article is organized as follows. <|MaskedSetence|> Section 3 is dedicated to two-parameter generalized relative entropy and its properties. <|MaskedSetence|> Then we conclude the article by comparing similar properties of Shannon, Tsallis and two-parameter generalized entropy.
**A**: The similar properties for the Tsallis entropy and divergence are investigated in detail [23], [24], [25]. **B**: In section 2, we define the joint entropy and the conditional entropy to present a number of properties of two-parameter generalized entropy as well as the chain rule. **C**: We discuss the information geometric aspects of entropy in section 4.
ABC
ABC
BCA
ABC
Selection 4
<|MaskedSetence|> <|MaskedSetence|> At a fixed sample size, all testing powers gradually decrease as $p$ increases. The proposed method using any of MGC, DCorr, or HSIC maintains relatively stable power with slow degradation in the case of linear dependence. The same trend is observed for nonlinear dependence, although the degradation is faster, with the MGC statistic performing the best. <|MaskedSetence|>
**A**: where $\odot$ denotes element-wise multiplication. **B**: It is worth emphasizing that due to the consistency property, if we fix $p$ and let $n$ increase, the testing power for our method shall increase to $1$. **C**: We intentionally design the matrix $D$ as a decaying weight, reflecting a meaningful multivariate simulation where additional dimensions contain weaker dependence signals. Figure 5 illustrates the testing power as dimensionality increases.
ACB
ABC
ACB
ACB
Selection 1
V-A2 Discriminative models Discriminative models were represented by two classifiers in our work: logistic regression (LR) and linear support vector machine (LSVM). <|MaskedSetence|> The squared hinge loss function was used for the LSVM classifier. Performance results on complete genomes are shown in Figures 2 and A.1 for each model. In genotype taxonomic classification, LR and LSVM models classify the data with near perfect weighted F-measures ($>0.989\pm 0.015$) across all k-mer lengths. <|MaskedSetence|> <|MaskedSetence|> The maximum weighted F-measure value of $0.975\pm 0.019$ is reached by all discriminative models in subtyping. In general, L2-based models perform better than L1-based models, which is clearly seen especially when $k>8$.
**A**: For both classifiers, we evaluated L1 and L2 penalties for regularization. **B**: The best performance is shown by LSVM using L1-based regularization with $k\in[9,10]$ (weighted F-measure $=1.000\pm 0.000$). **C**: In subtype classification, LR and LSVM model performances decrease slightly although the weighted F-measures remain greater than $0.941$ for all experiments (see Table II).
ABC
ABC
ABC
BCA
Selection 1
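The described model grid (LR and LSVM, each with L1 and L2 penalties, squared hinge loss for the SVM) corresponds directly to scikit-learn estimators; a minimal sketch with feature extraction omitted:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

models = {
    "LR-L1":   LogisticRegression(penalty="l1", solver="liblinear"),
    "LR-L2":   LogisticRegression(penalty="l2", solver="liblinear"),
    # L1-penalized LinearSVC requires squared hinge loss with dual=False
    "LSVM-L1": LinearSVC(penalty="l1", loss="squared_hinge", dual=False),
    "LSVM-L2": LinearSVC(penalty="l2", loss="squared_hinge"),
}
# for name, clf in models.items():
#     clf.fit(X_train, y_train)        # X: k-mer count matrix (placeholder)
#     f1_score(y_test, clf.predict(X_test), average="weighted")
```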
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Combining $i\notin N$ with $j\notin N$, we obtain $N\cap\{i,j\}=\varnothing$. It remains to prove that $N\cup\{i,j\}\in\mathcal{F}$. Note that $|\{a,b\}|=2$ and $|\{i,j\}|=2$ (since $i$ and $j$ are distinct). Hence,
**A**: Likewise, $j\notin N$. **B**: In other words, $i\notin N$. **C**: Hence, we cannot have $i\in N$.
CBA
CBA
CBA
ACB
Selection 1
<|MaskedSetence|> It facilitates multi-class modelling in a regression setting, allowing the surrogate to capture the interactions between multiple classes, hence explain them coherently. Each node of such a tree approximates the probabilities of every explained class – a level of detail that is impossible to achieve with surrogate multi-class classifiers – thus reflecting how individual interventions in the interpretable domain affect the predictions. Figure 2 shows an example of a surrogate multi-output regression tree. This is a significant improvement over training a separate regression surrogate for each explained class, which may produce diverse, inconsistent, competing or contradictory explanations – thus risk confusing the explainees and put their trust at stake – whenever these models do not share a common tree structure or split on different feature subsets. <|MaskedSetence|> <|MaskedSetence|> While surrogate regression trees that approximate the probability of a single class are guaranteed to output a number within the $[0,1]$ range – since the estimate is calculated as an average – this may not necessarily hold for multi-output trees. Approximating probabilities of multiple classes by averaging their values across a number of instances may yield estimates whose sum is greater than $1$, nonetheless these values can be rescaled to avoid confusing the explainees.
**A**: We address the challenge of simultaneously explaining multiple classes of a prediction output by a probabilistic model by proposing a first-of-a-kind surrogate explainer based on multi-output regression trees. **B**: Trees neither presuppose independence of features nor existence of a linear relationship between them and the target variable. **C**: Our contributions establish a new direction in XAI research – concerned with consistent and faithful explanations of multiple classes – and offer a pioneering method to address this challenge. Moreover, using decision trees (Breiman et al., 1984) as surrogates overcomes the shortcomings identified when linear models are used to this end (Sokol et al., 2019; Sokol and Flach, 2024; Sokol, 2021).
ACB
BAC
ACB
ACB
Selection 4
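The surrogate described above (one multi-output regression tree over all class probabilities, with rescaling when the averaged estimates sum past 1) has a short scikit-learn sketch; data and the black-box model are placeholders:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_surrogate(X, predict_proba, max_depth=4):
    P = predict_proba(X)                       # (n, n_classes) probabilities
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X, P)                             # one tree approximates all classes jointly
    return tree

def explain(tree, x):
    p = tree.predict(x.reshape(1, -1))[0]      # per-class probability estimates
    return p / p.sum() if p.sum() > 1 else p   # rescale if averaging pushed the sum past 1
```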
For example, Yuan et al. (2019b) trained two encoder-decoder networks to focus on global contexts and fine details, respectively, for smoke region segmentation. Hu and Lu (2018) trained spatial-temporal ConvNets using multi-task learning. Liu et al. (2019) classified smoke by fusing ResNet and ConvNet trained with the original RGB and Dark Channel images (He, Sun, and Tang 2010), respectively. Other work applied or enhanced object detectors, such as SSD (Liu et al. <|MaskedSetence|> 2016), Faster R-CNN (Ren et al. <|MaskedSetence|> 2019a; Zhang et al. <|MaskedSetence|> 2018; Lin et al. 2019). These works were evaluated on small datasets (Table 1), and none of them collaborated with local communities in air quality advocacy.
**A**: 2018; Yang et al. **B**: 2016), MS-CNN (Cai et al. **C**: 2015), and YOLO (Redmon and Farhadi 2017), to identify regions that have smoke (Xu et al.
BCA
BCA
BCA
BAC
Selection 1
We implement the proposed universal graph compressor (UGC) on four widely used benchmark graph datasets: the protein-to-protein interaction network (PPI) [46], the LiveJournal friendship network (Blogcatalog) [47], the Flickr user network (Flickr) [47], and the YouTube user network (YouTube) [48]. <|MaskedSetence|> We present in Table II the compression ratios of four competing algorithms. (Footnote 2: Note that CSR and Ligra+ are designed to enable fast computation, such as adjacency query or vertex degree query, in addition to compressing the matrix. <|MaskedSetence|>) In Fig. 1, we combine the comparisons of the compression ratios in Tables I and II in the logarithmic scale. Figure 1: Log-scale comparisons of the compression ratios for the proposed universal graph compressor with other competing compressors. See Tables I and II for the exact compression ratios. <|MaskedSetence|>
**A**: Our proposed compressor does not possess such functionality and is designed solely for compression purposes. **B**: The submatrix decomposition size $k$ is chosen to be $1,2,3,4$ and we present in Table I the compression ratios (the ratio between output length and input length of the encoder) of UGC for different choices of $k$. **C**: The column numbers of highlighted entries in Table I indicate the optimal $k$ value for UGC we used in this plot.
BAC
ACB
BAC
BAC
Selection 3
Intuitively, both a high density of dispersed diblock polymers or a high energy associated to an isolated diblock molecule correspond to a high rate of absorption of the dispersed polymers onto the bilayer interface. The arrival rate is a key quantity controlling defect formation. When the arrival rate is slow, the bilayer interface can grow in size to accommodate the new mass. <|MaskedSetence|> At moderate rates, a pearling bifurcation can be triggered, the onset of which is well understood within the context of the FCH gradient flow, [12]. <|MaskedSetence|> <|MaskedSetence|> The endcaps form most readily at points of high curvature of the bilayer interface. The stem of the endcap can grow,.
**A**: The pearling can be transient, subsiding as the dilute suspension of amphiphilic material is consumed. **B**: The growth process is adiabatic and has been studied rigorously, [6], deriving a motion against curvature, regularized by a higher order Willmore term that includes surface diffusion. If the rate of arrival increases beyond a critical threshold, then defects, such as pearling, endcaps, and loop formation are observed. **C**: The pearling can also lead to the formation of end-cap type defects, essentially micelles that remain connected to the underlying structure from which they emerged.
BCA
BAC
BAC
BAC
Selection 3
Our study considers not only direct contacts of the confirmed cases but also the contacts of those contacts and so on, which we call multilevel digital contact tracing [30]. It helps to identify potential transmission chains more comprehensively and quickly, thus mitigating outbreaks more effectively (details in Appendix sec. <|MaskedSetence|> Our algorithm processes the contact data and dynamically evolves a close contact graph. The nodes in the close contact graph are the individual users, and an edge is included if an interaction between a pair of people exists. <|MaskedSetence|> In other words, during a pandemic, the contact graph stores the latest $D$ days of continuous close contact data in a discrete form inside circular contact queues. Finally, the system prepares the contact list for a given infected person from the contact graph. <|MaskedSetence|> The system’s salient feature is that it automatically removes an inactive edge (contact) once the $D$ days are over and updates the contact queues. We provide an analytical approach for the completeness of the algorithm and validate it using synthetic and empirical data sets. As a case study, our analysis reveals that for COVID-19 contact trace parameters, storing the contact graph for 14 days for $10^{6}$ users takes 5 GB of memory space, and the preparation of the contact list for a given set of infected persons depends on the size of the infected list. Our algorithm is simple and easy to implement. We expect it to be an attractive choice to deploy in the application of digital contact traces in real-world pandemic situations.
**A**: Our algorithm prepares the direct and indirect (multilevel) contact lists of the infected persons and prepares the infection pathways. **B**: 1). This article provides a framework to automate multilevel digital contact tracing. **C**: Importantly, to store the temporal close contact information, we introduce an edge label between a pair of individuals in the contact graph as a fixed size binary circular contact queue.
CBA
BCA
BCA
BCA
Selection 4
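The circular contact queue described above can be modeled as a D-bit mask per edge that is rotated once per day; a hedged Python sketch (the bitmask encoding and the bounded BFS for multilevel tracing are illustrative choices, not the paper's exact implementation):

```python
from collections import defaultdict, deque

class ContactGraph:
    def __init__(self, days=14):
        self.days = days
        self.q = defaultdict(int)              # frozenset({u, v}) -> D-bit contact queue

    def record_contact(self, u, v):
        self.q[frozenset((u, v))] |= 1         # bit 0 = contact today

    def advance_day(self):
        for e in list(self.q):                 # shift history; day-D bits fall off
            self.q[e] = (self.q[e] << 1) & ((1 << self.days) - 1)
            if self.q[e] == 0:
                del self.q[e]                  # automatic removal of inactive edges

    def contacts(self, infected, levels=2):
        """Direct and indirect contacts up to `levels` hops (linear scan per node
        kept for brevity; an adjacency index would avoid the O(E) inner loop)."""
        seen, frontier = set(infected), deque((p, 0) for p in infected)
        while frontier:
            u, lvl = frontier.popleft()
            if lvl == levels:
                continue
            for e in self.q:
                if u in e:
                    (v,) = e - {u}
                    if v not in seen:
                        seen.add(v)
                        frontier.append((v, lvl + 1))
        return seen - set(infected)
```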
<|MaskedSetence|> The first method is specifying the instances of interest. By running set_highlight_instance() in the code block, the user-defined instances will be highlighted in the projection. The second method is brushing instances in the projection view. <|MaskedSetence|> <|MaskedSetence|> However, if users focus on a specific cluster that usually contains a lot of points, showing all the selected points with black edges will cause visual clutter. Thus, we fade the dots in unselected clusters to highlight the selected cluster as shown in Figure 5c.
**A**: By default, any selected dots are highlighted with a black edge as shown in Figure 5a, b. **B**: The third method is clicking the centroid points (squares in the projection) to highlight all the instances in the selected cluster. These selection methods allow users to quickly acquire their desired set of explanations for analysis. **C**: As shown in Figure 5, users can define subpopulations in three ways (R2).
CBA
BAC
CBA
CBA
Selection 4
Secondly, based on the theoretical results, we propose an algorithm for finding deep NN approximations of stable manifolds. <|MaskedSetence|> <|MaskedSetence|> We randomly choose a number of samples along each trajectory, and adaptively select additional samples near points with large errors from the previous round of training. Our approach is causality-free and does not depend on discretizing the space, making it suitable for high-dimensional problems. Causality-free algorithms have been successful in various applications. See e.g. <|MaskedSetence|> Our method is based on the stable manifold, an intrinsic geometric property of the HJB equation. With this framework, we can ensure the stability of the closed loop from the controller generated by the trained NN satisfying certain accuracy. There are few theoretical results on this topic in the literature. In empirical algorithms, the ‘equilibrium’ of the closed loop system from the NN may become unstable or disappear as time goes to infinity, as shown in [28]. Moreover, our method is different from that in [28], which devises certain architectures for approximate NN to stabilize the system. It is worth noting that in [36, 35, 5], the algorithms are based on an iterative procedure in a small neighborhood of the equilibrium, for which it is difficult to estimate the accuracy and time-consuming to generate trajectories.
**A**: Specifically, we solve two-point boundary value problems (BVPs) locally near the equilibrium and extend the local solutions using initial value problems (IVPs) for the characteristic Hamiltonian system. **B**: One of the crucial aspects of this algorithm is a composite loss function that incorporates the maximum error, the mean error of the NN from the exact stable manifold on the sample set, and the error between the derivative of the NN at the origin and the stabilizing solution of the Riccati equation (as shown in equation (4.1) below). Another crucial issue is adaptive data generation by solving the characteristic Hamiltonian systems which is inspired by [27]. **C**: [22, 24, 23, 39, 7, 13]. Our approach differs from those focused on solving the HJB equations, e.g., [37, 27].
BAC
BAC
BAC
BAC
Selection 3
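The composite loss described in this row (maximum error plus mean error on the sample set, plus the mismatch between the network's derivative at the origin and the stabilizing Riccati solution) can be sketched in PyTorch; the weights and the assumption that the network maps states to costates (so its Jacobian at 0 is comparable with the Riccati solution P) are illustrative:

```python
import torch

def composite_loss(net, x, y, P, w=(1.0, 1.0, 1.0)):
    """x: (N, d) sampled states; y: (N, d) stable-manifold targets at x;
    P: (d, d) stabilizing solution of the Riccati equation."""
    err = (net(x) - y).norm(dim=-1)            # per-sample error on the manifold samples
    max_term, mean_term = err.max(), err.mean()
    J = torch.autograd.functional.jacobian(net, torch.zeros_like(x[0]))
    deriv_term = (J - P).norm()                # derivative at the origin vs. Riccati
    return w[0] * max_term + w[1] * mean_term + w[2] * deriv_term
```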
The rest of this article is organized as follows. <|MaskedSetence|> Section III develops the algorithm CPCA. Section IV analyzes the accuracy and complexities of the proposed algorithm. <|MaskedSetence|> Section VI presents the simulation results. <|MaskedSetence|>
**A**: Further discussions on application scenarios and the algorithmic structure are given in Section V. **B**: Finally, Section VII concludes this article. **C**: Section II describes the problem of interest and provides preliminaries.
BCA
CAB
CAB
CAB
Selection 4
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Different from DND, each noisy image in SIDD comes with an almost noise-free counterpart as ground truth, which is estimated by some statistical methods [65]. Further, SIDD also provides a small version dataset containing 320 image pairs, called SIDD-Medium, which is commonly used as training data in recent works [27, 23, 24]. To compare with them fairly, we also train VIRNet only based on the SIDD-Medium dataset. As for the metrics, we adopt PSNR and SSIM [85] calculated on the sRGB space to quantitatively evaluate different methods. TABLE IV: Comparisons of different methods in terms of denoising performance, the number of model parameters (in M), FLOPs (in G),.
**A**: SIDD (https://www.eecs.yorku.ca/~kamel/sidd/benchmark.php) is another real-world denoising benchmark, containing about 30,000 real noisy images captured by 5 cameras under 10 scenes. **B**: DND (https://noise.visinf.tu-darmstadt.de) consists of 50 high-resolution images with realistic noise from 50 scenes taken by 4 consumer cameras, but it does not provide any other noisy/clean image pairs as training data. **C**: In this part, we evaluate the performance of VIRNet on two widely used real-world benchmark datasets, namely DND [84] and SIDD [65].
CBA
CBA
BCA
CBA
Selection 4
<|MaskedSetence|> We first sample category-specific meshes from a graspable objects’ meshes dataset, which will result in a dataset of meshes of a number of objects from the same category. <|MaskedSetence|> To render images, we sample 20 different cameras from a hemisphere, where the camera angles are specified by the extrinsics $R$ and $t$. <|MaskedSetence|> Using the 20 different camera views, we render 20 synthetic RGB and depth images of the object in the scene.
**A**: To achieve this, we generate synthetic training data for LSM. **B**: For the camera intrinsics parameter, we use the intrinsics of a PhoXi depth camera, such that each output depth image is effectively taken by a synthetic PhoXi depth camera, which was the camera we used when we trained Dex-Net. **C**: Afterwards, we load the meshes to PyRender, and apply randomized vertex and face colors to each mesh.
ACB
ACB
BAC
ACB
Selection 4
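The hemisphere-sampling and rendering setup described above can be sketched with PyRender; the intrinsics values below stand in for the PhoXi parameters (which are not given here), and the placeholder box mesh and look-at construction are assumptions:

```python
import numpy as np
import trimesh
import pyrender

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world pose whose -z axis points from eye towards target."""
    z = eye - target; z = z / np.linalg.norm(z)
    x = np.cross(up, z); x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x, y, z, eye
    return pose

scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(trimesh.creation.box()))  # placeholder object
scene.add(pyrender.DirectionalLight(intensity=3.0))
cam = pyrender.IntrinsicsCamera(fx=1100.0, fy=1100.0, cx=516.0, cy=386.0)  # assumed
renderer = pyrender.OffscreenRenderer(1032, 772)

rng = np.random.default_rng(0)
for _ in range(20):
    theta = rng.uniform(0, 2 * np.pi)          # azimuth on the hemisphere
    phi = rng.uniform(0.1, np.pi / 2)          # elevation (avoid the degenerate pole)
    eye = 0.6 * np.array([np.cos(theta) * np.sin(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(phi)])
    node = scene.add(cam, pose=look_at(eye))   # extrinsics R, t for this view
    color, depth = renderer.render(scene)      # synthetic RGB and depth images
    scene.remove_node(node)
```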
We generalize the framework presented in Bolukbasi et al. (2016) and cast it to a non-linear setting by exploiting its relationship to PCA. <|MaskedSetence|> (2016) is to kernelize it analogously to Schölkopf et al. <|MaskedSetence|> <|MaskedSetence|>
**A**: (1997), which is the kernelized generalization of PCA. Our approach preserves all the desirable formal properties presented in the linear method of Bolukbasi et al. **B**: Thus, the natural extension of Bolukbasi et al. **C**: (2016).
CBA
BAC
BAC
BAC
Selection 4
<|MaskedSetence|> Independently, [22] proposes a modification of EXP4 that achieves a high probability guarantee, which, however, necessitates changes in the reward estimates. High probability simple regret in the original form of EXP4 has not yet been explored. Furthermore, while simple regret has been extensively studied, recent focus has shifted to cumulative regret since it characterizes global optimality, even in the stochastic contextual setting [17]. Global optimality is especially important considering global exploration in RL, which has not yet been studied for EXP4, adding additional importance and relevance to our efforts. To this end, in this paper, we are the first to propose a new algorithm, EXP4.P, based on EXP4, that does not alter the reward estimates in bandits with unbounded rewards. We demonstrate that its optimal simple regret holds with high probability and in expectation for both linear contextual bandits and stochastic contextual bandits, where the rewards may be unbounded. Extending the proof to this unbounded context is non-trivial, necessitating the application of deep results from information theory and probability. <|MaskedSetence|> <|MaskedSetence|> As a by-product, this analysis also enhances EXP3.P to yield comparable outcomes for MAB. Moreover, we also establish an upper bound on the cumulative regret in the linear case, which not only closes the existing gap, but also shows the advantage of having good enough experts for global exploration. The upper bounds for unbounded bandits necessitate a sufficiently large $T$, and we provide a worst-case analysis suggesting that no sublinear regret is attainable below a certain instance-specific minimum $T$, through our novel construction of instances.
**A**: Moreover, for EXP4, the expected simple regret is proven to be optimal in the contextual bandit scenario in [3]. **B**: This includes establishing high-probability regret bounds in the bounded case with exponential terms and leveraging Rademacher complexity theory and sub-Gaussian properties to capture arm selection dynamics in the unbounded scenarios. **C**: Synthesizing these elements is highly technical and introduces new concepts.
ABC
ABC
BCA
ABC
Selection 2
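For reference, a minimal textbook EXP4 loop for bounded rewards in [0, 1] is sketched below; it makes explicit the importance-weighted reward estimates that, per the passage, EXP4.P leaves unchanged. It is not the EXP4.P variant or the unbounded-reward analysis itself, and the `advice` and `reward` callbacks are hypothetical.

```python
import numpy as np

def exp4(advice, reward, n_experts, n_arms, T, gamma=0.1, seed=0):
    """advice(t): (n_experts x n_arms) row-stochastic matrix; reward(t, a) in [0, 1]."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_experts)
    for t in range(T):
        xi = advice(t)                                   # expert advice matrix
        p = (1 - gamma) * (w / w.sum()) @ xi + gamma / n_arms
        a = rng.choice(n_arms, p=p)
        r = reward(t, a)                                 # observed reward
        r_hat = np.zeros(n_arms)
        r_hat[a] = r / p[a]                              # importance-weighted estimate
        w *= np.exp(gamma * (xi @ r_hat) / n_arms)       # exponential weight update
    return w
```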
<|MaskedSetence|> One or more generalized rings generate a non-trivial absorbing set. <|MaskedSetence|> <|MaskedSetence|> The ingredients of a reduced form are generalized rings, sets consisting of one non-singleton coalition (called “fixed components”), and a set of singletons. Consider again the game in Table 1 presented in the Introduction. Recall that this game has a unique non-trivial absorbing set. Furthermore, there are two generalized rings, {13,12,23}131223\{13,12,23\}{ 13 , 12 , 23 } and {45,46,56}454656\{45,46,56\}{ 45 , 46 , 56 }, and: .
**A**: We have shown a close connection between non-trivial absorbing sets and generalized rings. **B**: However, not all generalized rings can generate a non-trivial absorbing set. **C**: The concept of reduced form of a coalition formation game enables us to distinguish between those generalized rings that generate a non-trivial absorbing set and those that do not.
ABC
ABC
CBA
ABC
Selection 4
<|MaskedSetence|> <|MaskedSetence|> Nonetheless, our experiments are restricted to Wikipedia corpora. This data is naturally limited. For instance, while dialog utterances may rely on extra-linguistic clues, sentences in Wikipedia cannot. Furthermore, due to its broad target audience, the text in Wikipedia may be overly descriptive. <|MaskedSetence|>
**A**: Future work should investigate if similar results apply to other corpora.. **B**: Limitations This work focuses on proposing new information-theoretic approximations for both lexical ambiguity and bidirectional contextual uncertainty and on positing that these two measures should negatively correlate. **C**: In this experiment section, we tested the hypothesis on a set of typologically diverse languages.
CBA
BCA
BCA
BCA
Selection 2
<|MaskedSetence|> <|MaskedSetence|> In [5, 12] it was shown for the optimisation of mutual information that the optimiser depending on the channel cannot be calculated with the help of Turing machines even for discrete memoryless channels. Also the capacity-achieving codes cannot be constructed with the help of Turing machines depending on the channel [8]. <|MaskedSetence|> Such behaviour would be particularly interesting for information theory, because then coding and converse cannot be solved algorithmically for such a fixed channel at the same time. Such behaviour was demonstrated in [6] for computable compound channels, in [3] for Gaussian channels with coloured noise and in [4] for Wiener prediction theory. It is currently absolutely unclear whether the behaviour of the zero error capacity is complex enough to obtain this strongest behaviour with respect to non-Turing computability for the zero error capacity. The results from this work could be used in [BBD21, 2] to establish fundamental bounds on the capabilities of computer-aided design and autonomous systems, assuming that they are based on real digital computers. Additionally, the results of this work serve as the foundation for the work [BD24] that examines the computability of the reliability function and its associated functions..
**A**: For the zero error capacity, it is an open problem whether it can assume non-computable values even for computable channels. **B**: In information theory, the Turing computability of solutions has been investigated for further questions. **C**: In [7, 10, 11] it was shown for finite state channels that the capacity is not Turing computable depending on the channel parameters.
BCA
BCA
BCA
BAC
Selection 2
<|MaskedSetence|> <|MaskedSetence|> To make the DP use only $\operatorname{poly}(k\cdot\log\Delta)$ space, we will use only $O(k\cdot\log\Delta)$ leaf nodes. <|MaskedSetence|> Furthermore, we design an algorithm that runs in time and space $\operatorname{poly}(k\cdot\log\Delta)$ and finds an $(\alpha_{2}+\varepsilon)$-approximate estimation for each new leaf and each DP subproblem associated with it. Finally, we apply the DP using such leaves as basic subproblems to obtain the estimation. We start in Section 4.1 with a description of this approach in the offline setting, and then present its streaming implementation in Sections 4.2, 4.3 and 4.4. We then complete the proof of Theorem 1.1 in Section 4.5. .
**A**: Our approach for the streaming algorithm relies on a novel modification of the known PTAS for SFP in the offline setting [BH12, BKM15], which is based on dynamic programming (DP). **B**: Then, since each internal node in the quadtree has degree $4$, the total number of squares to consider is $O(k\cdot\log\Delta)$. **C**: One important reason why the DP requires $\Omega(n)$ space is that $\Omega(n)$ leaves in the quadtree have to be considered as basic subproblems which correspond to singletons.
ABC
ACB
ACB
ACB
Selection 4
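A toy quadtree over 2-D points illustrating the space issue mentioned above: splitting until every cell holds at most one point produces on the order of $n$ non-empty leaf "basic subproblems", which is exactly what the streaming algorithm avoids by keeping only $O(k\cdot\log\Delta)$ leaves. Coordinates and the depth cap are arbitrary.

```python
import numpy as np

def count_leaves(points, x0, y0, size, depth=0, max_depth=20):
    """Count non-empty leaves of a quadtree refined to one point per cell."""
    if len(points) == 0:
        return 0
    if len(points) == 1 or depth == max_depth:
        return 1
    half = size / 2
    total = 0
    for dx in (0, 1):              # recurse into the four child squares
        for dy in (0, 1):
            cx, cy = x0 + dx * half, y0 + dy * half
            mask = ((points[:, 0] >= cx) & (points[:, 0] < cx + half) &
                    (points[:, 1] >= cy) & (points[:, 1] < cy + half))
            total += count_leaves(points[mask], cx, cy, half, depth + 1, max_depth)
    return total

pts = np.random.default_rng(0).uniform(0, 1, size=(1000, 2))
print(count_leaves(pts, 0.0, 0.0, 1.0))   # grows linearly with the number of points
```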
<|MaskedSetence|> Approaches such as potential outcomes (Rubin, 1974) and directed acyclic graph (DAG) analysis (Pearl, 2000) allow for causal “treatment effects” to be estimated from observational data by adjusting for the confounding effects of underlying covariates. To demonstrate this, we look specifically at the effect of incarceration on recidivism (under the interpretation of prison time as a “treatment” imposed by the criminal justice system). <|MaskedSetence|> In other words, since the treatment cannot be administered randomly (as in a controlled clinical trial), certain observed or unobserved factors may influence both individuals’ outcomes and their propensities of receiving treatment. When considering the causal effect of incarceration on recidivism, this issue is especially prevalent: individuals who received longer sentences have likely committed more numerous or more serious crimes, potentially indicating a higher propensity to recidivate following release. A naive hypothesis test or regression analysis would therefore misstate the effects of incarceration. <|MaskedSetence|> We demonstrate using two popular approaches from the causal inference literature, outlined below and described in detail in Section 3. .
**A**: As a result, though existing recidivism literature focuses almost exclusively on predictive modeling, we turn our attention to causal inference methods, which enable us to conduct a more nuanced analysis of the social determinants of recidivism. **B**: Fortunately, adjusting for these confounding effects can be possible if we have access to criminal history information. **C**: Prior studies have assessed this relationship, but primarily via traditional econometric techniques, such as instrumental variables and discontinuity analysis, that rely on the identification of a suitable natural experiment (Rose and Shem-Tov, 2021; Loeffler and Nagin, 2022; Stevenson et al., 2023). The fundamental problem in observational causal analysis is that the treatment mechanism may be confounded.
ACB
ABC
ACB
ACB
Selection 4
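One standard confounding adjustment, inverse propensity weighting, sketched on synthetic data; the data-generating process and variable names are invented for illustration and are not the paper's actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                        # criminal-history covariates
propensity = 1 / (1 + np.exp(-(x @ [1.0, 0.5, -0.5])))
t = rng.binomial(1, propensity)                    # "treatment": incarceration
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * t + x @ [0.7, 0.2, 0.1]))))  # recidivism

# Estimate propensities from the covariates, then reweight outcomes so the
# treated and untreated groups are comparable despite confounding.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"IPW ATE estimate: {ate:.3f}")
```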
<|MaskedSetence|> In Section 2, we present the BFD scheme and prove this BFD scheme is a nodal-based DG method. <|MaskedSetence|> This method is then generalized for the Dirichlet boundary conditions and multi-dimensional Heat equation. In A, we analyze the BFD scheme in the eigenvectors space, find the optimal free parameter, and highlight the benefits of using the post-processing filters. <|MaskedSetence|>
**A**: The stability of this DG method has been proven. **B**: B presents the post-processing filters and their implementations. . **C**: This paper is constructed as follows.
ABC
CAB
CAB
CAB
Selection 3
<|MaskedSetence|> Its parameters are optimized using an Adam optimizer, given a noisy image and random noise as its input, which remain fixed during training. <|MaskedSetence|> Our cost function consists of a data term as described in eq. (20), as well as a smoothness regularizer. <|MaskedSetence|> Further implementation details are given in Appendix B (available online). Figure 5: Unrolled cost vs. TV relaxations - qualitative examples..
**A**: A U-Net [43] shaped CNN model with convolution layers added to its skip connections is selected. **B**: We compare images reconstructed using standard TV and our smoothness constraint in each experiment. **C**: In each scenario, a noisy image is generated using Additive White Gaussian Noise (AWGN) with variance σ2superscript𝜎2\sigma^{2}italic_σ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. The used test data is a combination of 14 grayscale and color images commonly used in image denoising baselines (see Appendix B, available online).
ABC
ACB
ACB
ACB
Selection 3
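A minimal PyTorch sketch of an objective of the kind described above: an MSE data term plus an anisotropic total-variation smoothness regularizer. The weight `lam` and the (N, C, H, W) tensor layout are assumptions.

```python
import torch

def tv_loss(x):
    """Anisotropic TV: mean absolute horizontal and vertical differences."""
    dh = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    dv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    return dh + dv

def denoising_objective(output, noisy, lam=0.1):
    data_term = torch.mean((output - noisy) ** 2)   # fidelity to the noisy image
    return data_term + lam * tv_loss(output)        # plus smoothness regularizer
```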
<|MaskedSetence|> This innovative synthesis allows for the seamless application of the method to astronautics problems, eliminating the need for intermediate steps. Furthermore, it facilitates convenient adjustments of parameters to maintain the desired orbit. To achieve this, we leverage the integrals of motion that define the orbital geometry in the two-body problem [13]. <|MaskedSetence|> <|MaskedSetence|> Initially, we represent the particle’s equations of motion in a radial-transverse-normal (RTN) frame, akin to the Frenet-Serret frame. We introduce a novel approach for constructing a sliding surface, utilizing a linear combination of radial and transverse components for each vector. Proof of the asymptotic stability of this new sliding surface is provided. One key advantage of our approach is its ability to use a single sliding surface and control command for effectively controlling the plane of motion. This simplicity and efficiency make it a compelling candidate for further research in applications where only the plane of motion should be controlled..
**A**: Instead of framing the path following problem in terms of heading, yaw rate, or angle, our approach draws inspiration from the two-body problem and directly incorporates constants that define the conic section into the control law. **B**: This unique approach allows any particle to describe a Keplerian orbit in a path-following guidance manner. Given the prevalence of disturbances encountered in most applications of this control law, such as solar radiation pressure and gravity field non-uniformity, we have designed the path-following law based on sliding mode control theory to achieve robust control. **C**: By controlling some of these constants, the path-following problem is formulated as a regulation problem of the angular momentum and eccentricity vectors of the two-body problem.
ACB
ACB
ACB
BAC
Selection 2
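The two-body integrals of motion referenced above can be computed directly from a position/velocity state; a short numpy sketch follows (units and the sample state are placeholders).

```python
import numpy as np

MU_EARTH = 398600.4418                       # gravitational parameter, km^3/s^2

def orbit_vectors(r, v, mu=MU_EARTH):
    h = np.cross(r, v)                       # specific angular momentum: h = r x v
    e = np.cross(v, h) / mu - r / np.linalg.norm(r)   # eccentricity vector
    return h, e

r = np.array([7000.0, 0.0, 0.0])             # km
v = np.array([0.0, 7.6, 1.0])                # km/s
h, e = orbit_vectors(r, v)
print("h =", h, "\n|e| =", np.linalg.norm(e))
```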
While impressive, these advances also pose a responsibility to the Natural Language Processing (NLP) community to interpret the behavior of the hundreds of attention heads in a single model, and potentially to reduce the number of computations. Responding to this challenge, previous work has taken pioneering steps to discover and to explain the sparseness in the attention patterns Vig and Belinkov (2019); Clark et al. (2019); Kovaleva et al. (2019); Yeh et al.
**A**: We train Transformer-based models and we analyze the global observed attention patterns, averaged over all input sequences in the train set, in order to identify and to remove weak connections between input tokens.. **B**: (2023); Biderman et al. **C**: (2023); Ruscio et al.
CAB
CBA
CBA
CBA
Selection 4
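A sketch of the averaging-and-thresholding idea: accumulate each head's attention map over the training set, then mask connections whose average weight falls below a cutoff. Shapes follow the HuggingFace-style `output_attentions=True` convention, and the threshold is an assumption rather than the paper's exact criterion.

```python
import torch

@torch.no_grad()
def average_attention(model, loader, n_layers, n_heads, seq_len, device="cpu"):
    """Global attention patterns averaged over all input sequences."""
    acc = torch.zeros(n_layers, n_heads, seq_len, seq_len, device=device)
    count = 0
    for batch in loader:                       # batch: (B, seq_len) token ids
        out = model(batch.to(device), output_attentions=True)
        atts = out.attentions                  # tuple of (B, H, S, S) tensors
        for l, att in enumerate(atts):
            acc[l] += att.sum(dim=0)
        count += atts[0].shape[0]
    return acc / count

def prune_mask(avg_attention, threshold=1e-3):
    return (avg_attention >= threshold).float()   # 0 marks a pruned connection
```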
<|MaskedSetence|> We call this label smoothing. We then ask if the MLE exists. We address this question in this work, and answer affirmatively. Moreover, in contrast to the previous works, we do not impose a requirement of data separability or the full rank of the data matrix. <|MaskedSetence|> In the case of small datasets, optimizers with a quadratic convergence rate such as Newton-Raphson are typically used. When datasets are very large, as is often the case in many modern datasets, or in the machine learning community, optimizers which are linear in convergence rate are used, an example being gradient descent. <|MaskedSetence|> Prior studies (Freund et al. [2018], Nacson et al. [2019b], Nacson et al. [2019a],Ji and Telgarsky [2019]) on the convergence of gradient descent for logistic regression assume data separability and binary classification. We note that according to the results in Albert and Anderson [1984], Silvapulle [1981], data separability and binary classification imply that the MLE does not exist-therefore these cases are not relevant to our scenario. To address the convergence rate we investigate spectral properties of the Hessian of the MLE and as a consequence we provide the convergence rate in terms of a desired contraction rate. 2 Notation and setup.
**A**: This provides motivation for our study of the optimization of the MLE problem using gradient descent as the optimizer. **B**: One may consider a generalization of the usual multi-class logistic regression by allowing the sample data to belong to all classes, albeit with varying probabilities. **C**: Given that an MLE exists, one typically seeks to find it by using a numerical optimization method.
BCA
BCA
BCA
CBA
Selection 1
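A minimal numpy sketch of gradient descent for multinomial logistic regression with soft (smoothed) labels, where each row of Y is a probability vector over classes rather than a one-hot vector; the step size and iteration count are arbitrary.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_soft_logreg(X, Y, lr=0.5, iters=2000):
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        P = softmax(X @ W)
        W -= lr * (X.T @ (P - Y) / n)         # gradient of the negative log-likelihood
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
labels = rng.integers(0, 3, size=200)
Y = np.full((200, 3), 0.05)
Y[np.arange(200), labels] = 0.90              # smoothed one-hot targets
W = fit_soft_logreg(X, Y)
```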
<|MaskedSetence|> This may not hold true in practice, where instead the server may only have partial information about the agents’ cost functions. Indeed, this assumption forces the Byzantine faulty agents to a priori fix their cost functions. However, in reality the Byzantine agents may send arbitrary information over time to the server that need not be consistent with a fixed cost function. <|MaskedSetence|> <|MaskedSetence|>
**A**: To prove the necessary condition, we also assume that the server has full knowledge of all the agents’ cost functions. **B**: Thus, necessity of (2⁢f,ϵ)2𝑓italic-ϵ(2f,\epsilon)( 2 italic_f , italic_ϵ )-redundancy under this strong assumption implies its necessity in general. **C**: .
ABC
ABC
BCA
ABC
Selection 1
Holger Boche thanks Martin Bossert for discussions and questions on the theory of the channel reliability function and on questions about the trustworthiness of numerical simulations on digital computers of the channel reliability function. Holger Boche also thanks Vince Poor and Martin Bossert for discussions at ISIT 2019 in Paris. <|MaskedSetence|> The authors acknowledge the financial support by the Federal Ministry of Education and Research of Germany (BMBF) in the programme of “Souverän. Digital. <|MaskedSetence|> Joint project 6G-life, project identification number: 16KISK002. H. Boche and C. <|MaskedSetence|> They were also sup-.
**A**: These discussions initiated the research work whose results are presented in this paper. **B**: Deppe acknowledge the financial support from the BMBF quantum programme QuaPhySI under Grant 16KIS1598K, QUIET under Grant 16KISQ093, and the QC- CamNetz Project under Grant 16KISQ077. **C**: Vernetzt.”.
ACB
ABC
ACB
ACB
Selection 4
<|MaskedSetence|> Specifically, judgements generated by different methods and distances generated by the proposed method in the application to COVID-19 recognition are provided in Tables 10 and 11. <|MaskedSetence|> However, the method proposed in this article differs slightly; we measure the difference between the patient and the standard example. If the distance measurement obtained is small, it is considered that the patient has a higher probability of being ill; if the distance measurement is large, it is considered that the patient has a lower probability of being ill. <|MaskedSetence|>
**A**: The objective of this task is to rank the likelihood of illness for four patients, that is, if a patient’s condition is similar to the given examples of illness, then their probability of being ill is higher; if there is a significant difference from the examples of illness, then it is inclined to consider that the patient’s probability of being ill is lower. **B**: The method of comparison in the table involves scoring the probability of each patient being ill, with a higher score indicating a higher tendency towards illness. **C**: Besides, the visualized results of distances generated by the proposed method in the application to COVID-19 recognition are provided in Fig. 3..
ABC
ABC
BAC
ABC
Selection 1
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> It aims to improve the learning process by ordering carefully the input examples. Initially, the order was established using prior expert data where tasks that are perceived by humans as easy are given to the learning model first [6]. Later it was suggested to learn the ordering [14]..
**A**: We also theoretically establish the convergence of MR and demonstrate in 1d cases its possible benefits. **B**: Note that some of the above, unlike our method, use an additional clean set, for performing the weighting. Curriculum Learning [6] or self-paced learning [32, 38] is another concept of ordering/weighting examples’ importance. **C**: We differ from existing works in using global weighting during optimization based on MW.
CAB
ABC
CAB
CAB
Selection 3
The Mocha framework [55] trains separate yet related models for each client by solving a primal-dual optimization. It leverages a shared representation across multiple tasks and addresses the challenges of data and system heterogeneity. However, the Mocha framework is limited to regularized linear models. Caldas et al. [56] further studied the theoretical potential of kernelized federated multi-task learning to solve the non-linearity. To solve the suboptimal results, Sattler et al. [57] studied the geometric properties of the federated loss surface. They proposed a federated multi-task framework with non-convex generalization to cluster the client population. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The approaches yield models with more accurate results, better generalization ability, and fairer performance across clients..
**A**: There are two algorithms, termed FedEM and D-FedEM, proposed for the client-server and fully decentralized setting, respectively. **B**: Hence, each client learns personalized mixture weights to obtain its personalized local model. **C**: [59] studies federated multi-task learning under a general assumption that each local data distribution can be seen as a mixture of distributions.
CBA
ABC
CBA
CBA
Selection 4
Iterative generation of molecular graphs Many graph-based models for molecule generation employ some form of iterative decoding. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> (2018) use random order. Some works go beyond a single ordering: Liao et al. (2019) marginalize over several orders, while Mercado et al. (2020) try both random and canonical, and find the latter produces better samples, which is consistent with our unconstrained generation results. Sacha et al. (2020) generate graph edits with the goal of modeling reactions, and evaluate a range of editing orders. Although the task in their work is different, the results are surprisingly close to ours: a fully random order performs badly, and the optimal amount of non-determinism is task-dependent. .
**A**: Often a single arbitrary ordering is chosen: Liu et al. **B**: (2018) first generate all atoms in a one-shot manner, and then generate bonds in BFS order; Jin et al. **C**: (2018; 2020) generate a coarsened tree-structured form of the molecular graph in a deterministic DFS order; and You et al.
CBA
ABC
ABC
ABC
Selection 2
Even if it explicitly targets the design domain, this approach is able to combine the three dimensions of creativity by Boden. <|MaskedSetence|> In particular, a large number of features might be needed. Otherwise, we might lose aspects of the artifacts that are fundamental to correctly quantify creativity. Since it is not possible to know the fundamental features in advance, the method requires one to enumerate as many features as possible. <|MaskedSetence|> In fact, the data points become increasingly “sparse” as dimensionality increases; many techniques (especially clustering) are based on distance, and therefore they may suffer from the curse of dimensionality (Steinbach et al., 2004). <|MaskedSetence|> A possible way to overcome the problems related to feature extraction and the curse of dimensionality might be to adopt deep learning techniques, given their effectiveness with unstructured data. .
**A**: However, the risk is to define an excessive number of non-informative attributes, making the computation of the metrics too computationally expensive. **B**: Nonetheless, it is limited by the fact that artifacts have to be described through an attribute-value pair representation. **C**: Finally, as for classic machine learning techniques, there is the need to manually define and extract the chosen features from unstructured data, which is a time-consuming and potentially prone-to-error activity.
BAC
BAC
ACB
BAC
Selection 2
Construct Validity refers to the concerns between the theory and our results. <|MaskedSetence|> <|MaskedSetence|> To mitigate the threat of multiple identities, we employ quality checks, including removing bots. To mitigate threats relating to dataset quality, we (1) removed PRs with no response and made sure to (2) leave the longest time window so that we could capture any potential future contributions. <|MaskedSetence|> Lastly, incorrect predictions and classifications could lead to incorrect results in our hypothesis testing, but we believe the sanity check does give us confidence to mitigate this threat. .
**A**: A key threat to validity is low model performance. Predicting whether or not a contributor will make a future contribution based on emotions is not an easy Software Engineering phenomenon, as there may be several factors not included in our scope. **B**: By including a baseline, we do claim that our models still perform relatively better than a random coin-toss. **C**: We also acknowledge that a response written in a non-English language could affect our results.
ABC
ABC
ABC
BCA
Selection 2
Decision rules (or rule sets) show similarity with DTs in the sense that every leaf node of a DT can be translated into a single rule by following the decision path starting from the root node. Bertsimas and Dunn, (2017) aim to find an optimal DT for classification by formulating a MILP problem where single and multi-dimensional half-spaces recursively divide the feature space into disjoint areas during its construction. Verwer and Zhang, (2019) propose a MILP formulation based on binary encoding of features to learn optimal DTs with a predefined depth. McTavish et al., (2022) introduce a method for fast sparse decision tree optimization via smart guessing strategies applicable to any optimal branch-and-bound-based decision tree algorithm. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: These studies depend on solving MILPs or enumerative search strategies, and thus, suffer from intractability on large datasets due to their long-running time requirements. **B**: Other notable works considering DTs include Fırat et al., (2020); Aghaei et al., (2021); Günlük et al., (2021); Alston et al., (2022), and Demirović et al., (2022). **C**: Alternatively, our methodology is based on an LP formulation to achieve scalability. .
BAC
BAC
BAC
CBA
Selection 1
In the last decade, rigorous guarantees on the behavior of SON clustering have been studied by several authors, including Zhu et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> (2020); Chi and Steinerberger (2019); Jiang and Vavasis (Preprint, 2020); Sun et al. (2021); Nguyen and Mamitsuka (Preprint, 2021). Most of these works aim at the identification of sufficient conditions for SON clustering to succeed in separating clusters. Our main goal here, stated precisely in Theorem 1.1, is rather to present a seemingly simple clustering problem in which the SON clustering algorithm will typically fail. This requires us to establish necessary and sufficient conditions for the success of SON clustering, which we present in Subsection 1.3. We anticipate that these conditions will be useful in future studies of sum-of-norms clustering, and thus are interesting results in their own right. .
**A**: (2017); Radchenko and Mukherjee (2017); Jiang et al. **B**: (2014); Tan and Witten (2015); Chiquet et al. **C**: (2017); Panahi et al.
BCA
ABC
BCA
BCA
Selection 1
4 Experiments Experiment setup We systematically compare FTL, TF, LWFT, and our TTL on 3 MIC tasks covering both 2D and 3D image modalities, and also explore a 2D lung segmentation task.
**A**: For experiments involving randomness, we repeat them 3333 times and report the mean and standard deviation. **B**: More detailed experiment setup, ablation studies, and other explorations can be found in supplementary materials. . **C**: For the 3D MIC task, we choose ResNeXt3D-101 as the default model and compare it with PENet which is a handcrafted model in Huang et al.
CAB
CAB
ABC
CAB
Selection 4
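A hedged PyTorch sketch of the quoted finetuning recipe — separate learning rates for pretrained and randomly initialized parameters, SGD with momentum 0.9, and cosine annealing over 100 epochs. The backbone and the choice of the final layer as the randomly initialized part are placeholders.

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # stand-in backbone
new_params = list(model.fc.parameters())                      # freshly initialized head
new_ids = {id(p) for p in new_params}
pretrained_params = [p for p in model.parameters() if id(p) not in new_ids]

optimizer = torch.optim.SGD(
    [{"params": new_params, "lr": 0.01},         # randomly initialized weights
     {"params": pretrained_params, "lr": 0.1}],  # pretrained weights
    momentum=0.9,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    # ... one training epoch over the task's loader ...
    scheduler.step()
```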
<|MaskedSetence|> This search process results in the creation of a diverse teacher network zoo, which in turn facilitates the generation of a robust ensemble of learners. <|MaskedSetence|> This transfer mechanism not only reduces inference time but also ensures that the student network maintains a high level of accuracy, thus striking an optimal balance between efficiency and performance. Lastly, the effectiveness of our approach has been thoroughly validated through rigorous experimentation on two public datasets. Notably, our method has achieved first place in both the AAPM and AIMIS challenges. <|MaskedSetence|>
**A**: The LENAS method offers several notable advantages. Firstly, the U-NAS framework enables automatic neural architecture search, allowing for the efficient exploration of a vast array of architecture configurations. **B**: By leveraging this ensemble method, we are able to enhance the overall performance of the system. Secondly, the KDA-Net enables a hierarchical transfer of distilled knowledge from the teacher networks to the student network. **C**: This outstanding performance serves as compelling evidence of the promising potential and efficacy of our proposed LENAS method..
ABC
BCA
ABC
ABC
Selection 1
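A standard temperature-scaled knowledge-distillation loss of the kind such teacher-to-student transfer typically relies on, shown for a classification head; the temperature and weights are placeholders, and KDA-Net's actual hierarchical losses are described in the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),   # student's softened prediction
        F.softmax(teacher_logits / T, dim=1),       # teacher's softened target
        reduction="batchmean",
    ) * (T * T)                                     # rescale gradient magnitude
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```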
<|MaskedSetence|> The goal is to focus much more on application development for which we need very large and powerful computer platforms. PISQ is independent of any of the qubit technologies but looks exclusively at the functional outcome of any quantum circuit. One abstracts away all the decoherence and quantum errors that any hardware qubits approach still has. It allows us to focus on quantum computational logic. <|MaskedSetence|> Many more simulators are being developed to execute PISQ-based algorithms, even though few people will call it PISQ. It is an opening to any industrial or societal-relevant topic for which no quantum algorithms or versions have been developed. Multiple benefits can be found when approaching the PISQ-methodology. <|MaskedSetence|> The second benefit is that new quantum gates can be defined and used. It is absolutely vital that we increase the number of quantum gates above the 20-30 that are part of the universal quantum gate set. This is also important for the micro-architecture and the hardware implication of any qubit chip..
**A**: • PISQ - Perfect (qubits) Intermediate Scale Quantum - We introduced this PISQ-label to improve the development of end-user quantum applications. **B**: PISQ is a very user-friendly way to develop quantum versions of the functional software anyone will need, once the quantum computer or accelerator will reach the market. **C**: The first is that one focuses on functionality of the quantum circuit one needs for any application, such as genetics and chemistry.
ABC
ABC
ACB
ABC
Selection 1
Another focus of this work is the study of the causes of performance drop in class-IL. <|MaskedSetence|> <|MaskedSetence|> However, the metrics defined for task-IL are not well suited to the class-IL setting and can give misleading results when applied directly to the latter. In this work, we will define similar metrics adapted to class-IL, and use them to increase understanding of the main causes of performance drop when moving to that setting. When learning in an incremental manner, we consider two types of features that can be learned by the feature extractor (see Fig. 1). A cross-task discriminative feature discriminates between classes that belong to different tasks, while an intra-task discriminative feature discriminates between classes from the same task. Furthermore, some features might appear during training that satisfy both types of discrimination. Given these feature types, we postulate that replay in class-IL should fulfil multiple roles at the feature extractor level. First, to maintain previously learned intra-task discriminative features. Second, to create cross-task discriminative features capable of discriminating between classes not present in the same task. For instance, in Fig. 1, discriminating between classes of the same task only requires learning color features (intra-task), while solving the cumulative tasks proposed in class-IL also requires learning shape features (cross-task). Its third role is to enable knowledge transfer between the new task and previous tasks so that one task can benefit from the learning of another, as occurs when learning multiple tasks concurrently (Baxter, 2000). Failing to learn either type of feature would result in a higher misclassification rate. <|MaskedSetence|>
**A**: As in other incremental learning settings, processing the sequence of tasks in class-IL leads to a dramatic drop in performance from earlier tasks to the latest ones. **B**: In this paper, we aim to study whether these two types of features are learned properly with replay, especially cross-task features. . **C**: Many class-IL works associate catastrophic forgetting, which has been widely studied in the task-IL setting, to this performance drop.
ABC
ACB
ACB
ACB
Selection 2
In this paper, we propose a joint extraction scheme based on the dual-decoder (i.e., TR decoder and RE decoder) model. The main idea is to first detect relations from text and model them as extra features to guide the entity pair extraction. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In particular, how to enhance the accuracy of extracting candidate relations from the text is a key issue for our relation-first extraction. Note that the accuracy issue is an open problem for both entity-first and relation-first approaches. That is, for entity-first extraction, we still have to consider how to ensure the accuracy of extracting entities without relation information. This seems like a chicken-and-egg problem. Compared with the entity-first approach, our relation-first approach can make better use of such information to improve the accuracy of joint extraction. In addition, in future work, we will pay attention to applying our model to other complex information extraction tasks for open pit mine accident prediction, such as document-level tuple extraction..
**A**: In this way, it is able to effectively solve the overlapping triple problem. **B**: Our approach is straightforward: the TR decoder detects relations from a sentence at the text level, and then the RE decoder extracts the corresponding head and tail entities for each detected relation. **C**: Experiments on both public datasets and real open pit mine datasets demonstrate that our proposed model can achieve better F1 scores under the strict evaluation metrics, especially achieving significantly better results on relational triple elements. The design of our cascade dual-decoder model still has much potential to be improved.
BAC
ABC
BAC
BAC
Selection 4
Generative Transformers in Vision Motivated by the success of GPT-3 (Brown et al., 2020), a few pilot works study image generation using Transformer by autoregressive learning (Chen et al., 2020b; Esser et al., 2021) or cross-modal learning between image and text (Ramesh et al., 2021). <|MaskedSetence|> <|MaskedSetence|> Recent work of (Hudson & Zitnick, 2021), embeds (cross-)attention module within the CNN backbone (Karras et al., 2020b) in a similar spirit to (Zhang et al., 2019). <|MaskedSetence|> Our approach is complementary to theirs as we propose key techniques for training stability within the original ViT backbone (Dosovitskiy et al., 2021). .
**A**: On the contrary, our work trains Vision Transformers in the generative adversarial training paradigm. **B**: These methods are different from ours as they model image generation as a autoregressive sequence learning problem. **C**: The closest work to ours is TransGAN (Jiang et al., 2021), presenting a GAN model based on Swin Transformer backbone (Liu et al., 2021b).
ABC
BAC
BAC
BAC
Selection 2
One of the most popular applications of the spanners is to serve as overlay networks in wireless networks [vRW04, BDS04, SS10]. In these applications, we are often interested in solving computational problems in the spanner, such as shortest path [LWF03, GZ05], independent set [Bas01, MM09], dominating sets [MM09, PCAT18], connected dominating set [YWWY13]. The existence of sublinear separators in spanners implies that we can design provably good algorithms for these problems. In graph-theoretic terms, the spanners in Theorem 2 and Theorem 3 have polynomial expansion [DN16]. <|MaskedSetence|> Thus, all of these problems admit PTAS in our spanners. However, if the graphs have weights on vertices, the algorithm of Har-Peled and Quanrud [HPQ17], which is based on local search, does not have any guarantee on the approximation ratio. <|MaskedSetence|> <|MaskedSetence|>
**A**: We remark that planar graphs and minor-free graphs on which these problems were extensively studied [Bak94, Epp00, DH05, DHK05] are special cases of graphs with polynomial expansion.. **B**: Indeed, designing a PTAS for vertex-weighted NP-hard problems in graphs with polynomial expansion remains an open problem [Dvo18]. **C**: Har-Peled and Quanrud [HPQ17] showed that many unweighted optimization problems such as independent set, vertex cover, dominating set, connected dominating set, packing problems, admit a polynomial-time approximation scheme (PTAS) in graphs with polynomial expansion.
CBA
ABC
CBA
CBA
Selection 1
The paper is organized as follows: In Section 2, we review the concepts of linear polytree SEM, Markov equivalence and CPDAG, and the polytree learning method based on the Chow-Liu algorithm. In Section 3, we give optimal sample size conditions for both the skeleton and CPDAG recovery, particularly in terms of the minimum correlation over the tree skeleton. In Section 4, we introduce a version of PC algorithm adapted to the linear polytree models, and establish the same sample size conditions. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: A brief summary of our work and some potential future research are discussed in Section 8. 2 Linear Polytree Models and Learning. **B**: Our theoretical findings are empirically demonstrated in Section 7, along with numerical results under some benchmark simulated data in the literature of DAG learning. **C**: In Section 5, we discuss a method of estimating the inverse correlation matrix for linear polytree models, and establish an upper bound of estimation in the entry-wise $\ell_{1}$ norm.
CBA
CBA
CBA
BCA
Selection 1
Figure 8 shows quantitative measures of comparison between our inference procedure and \citet{gopalan2013efficient}’s stochastic variational inference algorithm. <|MaskedSetence|> The middle and right panels show boxplots of the area under the curve and relative ranks of both methods respectively. These plots were formed by subsampling 10,000 links and nonlinks and computing the probability of a link. Each of these subsamples forms an average rank measure and ROC curve which admits an area under the curve. <|MaskedSetence|> <|MaskedSetence|>
**A**: The left panel shows the ROC curves. **B**: The rank measure was formed by taking the average rank of the links according to their predictive probabilities. **C**: We see that our method performs worse with regards to these metrics. .
ABC
CBA
ABC
ABC
Selection 4
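A sketch of the evaluation protocol described above: score subsampled links and non-links, then compute the area under the ROC curve and the average rank of the true links. `predict_prob` is a hypothetical scoring function standing in for either model's link probability.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import rankdata

def link_prediction_metrics(pairs, labels, predict_prob):
    """pairs: list of (i, j) node pairs; labels: 1 for links, 0 for non-links."""
    scores = np.array([predict_prob(i, j) for i, j in pairs])
    labels = np.asarray(labels)
    auc = roc_auc_score(labels, scores)
    ranks = rankdata(scores)                 # higher score -> higher rank
    avg_link_rank = ranks[labels == 1].mean()
    return auc, avg_link_rank
```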
Even though transformers are special cases of neural operators, the standard attention mechanism is memory and computation intensive, as seen in Section 6, compared to neural operator architectures developed here (7)-(9). <|MaskedSetence|> Recently, efficient attention mechanisms have been explored, e.g. long-short Zhu et al. <|MaskedSetence|> <|MaskedSetence|>
**A**: (2021) and adaptive FNO-based attention mechanisms (Guibas et al., 2021). **B**: However, many of the efficient vision transformer architectures (Choromanski et al., 2020; Dosovitskiy et al., 2020) like ViTs are not special cases of neural operators since they use CNN layers to generate tokens, which are not discretization invariant. . **C**: The high computational complexity of transformers is evident in (35) since we must evaluate a nested integral of $v$ for each $x\in D$.
CAB
BAC
CAB
CAB
Selection 4
In this paper, we have presented a complete OSM-based autonomous navigation pipeline for Unmanned Ground Vehicles (UGV) in unstructured outdoor environments. As a topological representation, we use road networks from OSM for global path planning. <|MaskedSetence|> At the local path planning level, we presented the novel Naive-Valley-Path method. We demonstrate how this method achieves navigation at the center of the trafficable areas, always following the shape of the road, avoiding the common deviation problems in OSM-based applications. <|MaskedSetence|> In this way, we could navigate in an exploration mode into completely unknown areas. <|MaskedSetence|>
**A**: Additionally, given its time efficiency, we show how our NVP method achieves fast and robust obstacle avoidance even in dynamic cases, such as cars and pedestrians, and how it recovers the center of the road after avoidance. As future work, we plan to use an online interface with JOSM to make a dynamic graph for the GPP. **B**: Also, we plan to research localization using OSM information, and use more sophisticated perception techniques, such as Convolutional Neural Networks (CNNs), to classify obstacles and landmarks in the environment.. **C**: That demonstrates several advantages, such as global consistency and an easy map setup of autonomous navigation applications.
CAB
CAB
BCA
CAB
Selection 2
<|MaskedSetence|> This indicates the advantage of the proposed Bayesian low-rank framework. <|MaskedSetence|> <|MaskedSetence|> As we mentioned in Section 3.5, the computational cost of BKTR is substantially reduced compared to the STVC model. According to the experiments conducted, BKTR is capable of dealing with regression problems containing up to millions of coefficients. .
**A**: Thus, the model can consistently offer reliable estimation results, implying its effectiveness and usability for real-world complex spatiotemporal data analysis. Another benefit of the proposed framework is the highly improved computing efficiency. **B**: Since we introduce a fully Bayesian sampling treatment for the kernelized low-rank tensor model which is free from parameter tuning, BKTR can estimate the model parameters and hyperparameters even when only a small number of observations are available. **C**: As we can see from the test of different rank settings and observation rates in the simulation experiments (see Figures 4 and 9), BKTR is able to provide high estimation accuracy and valid CIs even with a much larger or over-specified rank, and also effectively estimates the coefficients and the unobserved output values when only 10% of the data is observed.
CBA
CBA
ABC
CBA
Selection 1
<|MaskedSetence|> <|MaskedSetence|> However, it has the great advantage of being much more efficient for incremental processing, since recomputation at each time step is avoided. The output of the recurrent LT is generally more stable for sequence classification and monotonic for tagging. <|MaskedSetence|> It is also beneficial to train such model with input prefixes, allowing it to learn more robust predictions. Acknowledgements.
**A**: With recurrent computation, the Linear Transformer (LT) has inferior non-incremental performance compared to the regular Transformer and the LT with restart-incrementality. **B**: We studied the use of Transformer encoders for incremental processing and concluded that it is possible to deploy them as incremental processors with certain trade-offs. **C**: Its non-incremental performance drop can be mitigated by introducing delay, which also improves the incremental metrics.
ACB
BAC
BAC
BAC
Selection 3
<|MaskedSetence|> Communications required by parallelization strategies are managed through collective communications (i.e., collectives). Common collectives in distributed training are illustrated in Fig. 6. The most prominent collective in distributed training is All-Reduce [19]. It is logically equivalent to Reduce-Scatter followed by an All-Gather [46]. In certain cases of TP, like embedding tables [14], All-to-All is required. <|MaskedSetence|> For example, Ring, Direct, and Recursive Halving-Doubling are commonly used All-Reduce algorithms [49]. These algorithms are topology-aware collective algorithms designed for Ring, FullyConnected, and Switch networks, respectively, ensuring that they do not introduce link contention when running on their respective physical topologies. <|MaskedSetence|> Basic collective algorithms are not ideally suited for direct use over multi-dimensional networks. For instance, the Direct collective algorithm performs well on a FullyConnected network, but the physical connectivity of multi-dimensional networks often does not meet such expectations. Consequently, heavy network contention and oversubscription over low-BW links can occur, leading to significant underutilization of network BW. To address this issue, multi-rail collective algorithms have been proposed to fully leverage the resources of multi-dimensional networks [50, 46], which is the approach adopted in Libra. A multi-rail collective algorithm capitalizes on the inherent nature of multi-dimensional networks, where basic network building blocks are stacked up. Therefore, it executes basic collective algorithms in sequence. For example, to perform an All-Reduce collective on an $N$-dimensional network: .
**A**: Topology-aware Collective Algorithms. **B**: Real systems execute collectives using collective communication algorithms through Collective Communication Libraries (CCLs) [47, 48]. **C**: Fig. 7 lists common network building blocks and their corresponding topology-aware collective algorithms. Multi-rail Collective Algorithm.
ABC
ABC
ABC
ABC
Selection 2
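A toy in-memory simulation of Ring All-Reduce showing its two phases — reduce-scatter, then all-gather — over p "ranks"; real CCLs implement the same schedule with actual network transfers between devices.

```python
import numpy as np

def ring_all_reduce(buffers):
    """buffers: one equal-length 1-D array per rank; returns the reduced copies."""
    p = len(buffers)
    chunks = [np.array_split(b.astype(float), p) for b in buffers]
    # Phase 1: reduce-scatter. After p-1 steps, rank r owns the full sum of chunk (r+1) % p.
    for step in range(p - 1):
        for r in range(p):
            c = (r - step) % p
            chunks[(r + 1) % p][c] = chunks[(r + 1) % p][c] + chunks[r][c]
    # Phase 2: all-gather. Circulate each fully reduced chunk around the ring.
    for step in range(p - 1):
        for r in range(p):
            c = (r + 1 - step) % p
            chunks[(r + 1) % p][c] = chunks[r][c]
    return [np.concatenate(cs) for cs in chunks]

bufs = [np.arange(8) * (r + 1) for r in range(4)]
out = ring_all_reduce(bufs)
assert all(np.allclose(o, np.arange(8) * 10) for o in out)  # 1+2+3+4 = 10
```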
<|MaskedSetence|> This means that even if a single illegitimate user participates in the authentication process, the authentication fails. <|MaskedSetence|> This means that any attacker can disrupt the service and the culprit may be unknown. In our scheme, only those who have a basis set for a fixed subspace can join the authentication or key agreement phase. Thus, an intruder with an invalid basis cannot interfere with the authentication phase or cause any trouble. Therefore, our scheme prevents a DOS attack by an attacker. Replay and Man-in-the-Middle Attack: The fact that the authentication algorithm does not require any private data sharing creates a safe environment against Man-in-the-Middle attacks. In fact, since an authentication or construction of a shared secret key is handled privately by using a public random vector, an intruder without knowledge of the predetermined subspace cannot proceed to communicate with any group member. <|MaskedSetence|>
**A**: In addition to this, the group manager or members joining the authentication process cannot recognize the illegitimate user, and this makes such algorithms vulnerable to DOS attacks. **B**: In this section, we discuss the well-known threat models and the proposed algorithm’s resistance to these attacks. DOS Attack: In the secret sharing-based GASs, the authentication is performed only when a certain number of shares or more are available and the authentication can be completed only if all participants are legitimate. **C**: Moreover, since for each authentication session, the group publishes a different vector and a nonce, any adversary sniffing exchanged data in earlier sessions cannot perform a replay attack. .
BAC
BAC
CAB
BAC
Selection 2
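Purely schematic linear algebra behind the access-control idea above: a party holding a basis of the fixed subspace can test whether a published vector lies in that subspace, while a party without the basis cannot. This is an illustration, not the paper's actual protocol.

```python
import numpy as np

def in_subspace(v, basis, tol=1e-10):
    """basis: columns span the secret subspace; test membership of v."""
    coeffs, *_ = np.linalg.lstsq(basis, v, rcond=None)
    return np.linalg.norm(basis @ coeffs - v) < tol

rng = np.random.default_rng(1)
basis = rng.normal(size=(8, 3))                 # secret 3-dim subspace of R^8
member_vec = basis @ rng.normal(size=3)         # lies in the subspace
outsider_vec = rng.normal(size=8)               # almost surely does not
print(in_subspace(member_vec, basis), in_subspace(outsider_vec, basis))
```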
IV-A IF-TEM Quantization Noise In this section, we introduce an upper bound for the error resulting from time sequence quantization. Lazar and Tóth [16] established an upper bound for the reconstruction error associated with the asynchronous sigma-delta modulator (ASDM) sampler. <|MaskedSetence|> Our work, however, offers a distinct perspective. We evaluate the MSE of the IF-TEM with quantization, and compare it with the conventional ADC that employs uniform sampling and quantization. Given the distinctions in our methodological approaches, it is anticipated that our results may differ, particularly when considering the specific characteristics of the IF-TEM sampler with quantization we introduce here. <|MaskedSetence|> Yet, even if this aspect had been investigated, their conclusions, anchored in non-uniform sampling for both ASDM and traditional samplers, would remain the same. In contrast, we show that the quantization step size of the IF-TEM sampler can be decreased when the maximal frequency of a BL signal is increased or the number of pulses of an FRI signal is increased. <|MaskedSetence|>
**A**: They demonstrated that the MSE upper bound for time quantization is the same as that of amplitude quantization under conditions of non-uniform sampling for both samplers with a fixed number of bits. **B**: Note that Lazar and Tóth [16] did not explore the relationship between a signal’s energy, frequency, and maximal amplitude in the context of the time-based sampler MSE. **C**: Consequently, under specific parameter configurations, the IF-TEM sampler exhibits lower MSE bound compared to a classical ADC with equivalent bit depth. .
ABC
ABC
BCA
ABC
Selection 4
For the tri-objective DST problem, we show that there exists at least 1 solution ending in a time-out that is part of the Pareto-front. <|MaskedSetence|> <|MaskedSetence|> Unfortunately, the new acceleration-based action space makes this proof significantly more complicated than for the bi-objective environment. Following a line of reasoning similar to that used for this proof in the bi-objective case, it quickly becomes clear that this can not be proven in the same way. In the tri-objective case, a collision necessarily must be considered as consisting of multiple timesteps, since it is possible for an agent to accumulate more velocity than it can arrest in a single step. <|MaskedSetence|>
**A**: Besides this, the authors also attempt to prove whether or not a collision impacts the optimality of a solution. **B**: Proving or disproving this property would thus require a fundamentally different line of reasoning, which the authors feel speaks to the added complexity inherent in the tri-objective environment. . **C**: The proof for this property can be found in the supplementary material.
CAB
CAB
ACB
CAB
Selection 4
<|MaskedSetence|> We begin by defining the problem and stating our goals in Section 2. In Section 2.1, we introduce the local covariance matrix and its associated operators. A brief overview of GP regression is presented in Section 2.2. Section 2.3 describes the primary challenges and the motivation of MrGap. The denoising and interpolation phases of the algorithm are detailed in Sections 2.4 and 2.5, respectively. The selection of parameters is discussed in Section 2.6. The theoretical analysis of MrGap is provided in Section 3. Within Section 3.1, we introduce geometric preliminaries. In Section 3.2, we provide the bias and the variance analysis of the local covariance matrix. Section 3.3 focuses on constructing charts of a manifold based on the operators associated with the local covariance matrix. We relate the reconstruction of these charts to a regression problem in Section 3.4 and discuss the theoretical aspects of the algorithm based on our earlier findings in Section 3.5. Section 3.6 introduces a geometric root mean square error to evaluate the algorithm’s performance. <|MaskedSetence|> In Section 5, we apply MrGap to augment bird vocalization data. <|MaskedSetence|> Commonly used notations in this paper..
**A**: We summarize our notations in Table 1. Table 1. **B**: The remainder of the paper is structured as follows. **C**: A numerical simulation is presented in Section 4.
BCA
BCA
BCA
BAC
Selection 2
The purity of all such clusters was tested to find the composition based on the days of the week. Fig. <|MaskedSetence|> Clustering over activity using Jensen-Shannon led to a marginally higher cluster purity compared to clustering done with Euclidean. <|MaskedSetence|> <|MaskedSetence|> Each center gives us insight into what user behavior looks like on weekdays or weekends. Figure 3 shows the two cluster composition. .
**A**: We used Jensen-Shannon over Kullback-Leibler because Jensen-Shannon is a symmetrical incarnation of the same formula. Figure 2 describes the cluster centers for the two behavior mode clusters. **B**: 3 shows the activity cluster composition using two distance metrics: Euclidean and Jensen-Shannon (JS) divergence. **C**: Cluster 1 in Jensen-Shannon clustering has a higher number of weekend days compared to cluster 1 in Euclidean clustering.
BCA
BCA
CAB
BCA
Selection 4
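The symmetric divergence used above, written out: Jensen-Shannon is the symmetrized, smoothed counterpart of Kullback-Leibler, applied here to toy activity histograms.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return np.sum(p * np.log(p / q))

def jensen_shannon(p, q):
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)     # symmetric: JS(p,q) == JS(q,p)

weekday = np.array([0.05, 0.30, 0.40, 0.25])   # toy activity histograms
weekend = np.array([0.25, 0.25, 0.25, 0.25])
print(jensen_shannon(weekday, weekend))
```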
4 NeuroBack In order to reduce the computational cost of the online model inference and to make CDCL SAT solving more effective, NeuroBack employs offline model predictions on variable phases to enhance the phase selection heuristics in CDCL solvers. Figure 1 shows the overview of NeuroBack. First, it converts the input CNF formula into a compact and more learnable graph representation. Then, a well-designed GNN model trained to predict the phases of backbone variables, is applied on the converted graph representation to infer the phases of all variables. The model inference is performed only once before the SAT solving process. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: The resulting offline prediction is applied as an initialization for the SAT solving process. **B**: Finally, the enhanced SAT solver outputs the satisfiability of the input CNF formula. **C**: The key components of NeuroBack including the graph representation of CNF formulas, the GNN-based phase selection, and the phase prediction application in SAT solvers, are illustrated in the subsections below. .
ABC
ABC
BAC
ABC
Selection 2
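A generic sketch of turning a DIMACS CNF formula into a graph a GNN could consume — a bipartite variable/clause graph with polarity-typed edges. NeuroBack's exact graph representation is described in the paper, so this encoding is only illustrative.

```python
def cnf_to_graph(dimacs_lines):
    """Return (#variables, #clauses, edges) with edges (var, clause, polarity)."""
    edges = []
    n_vars = 0
    clause_id = 0
    for line in dimacs_lines:
        if line.startswith("c"):                 # comment line
            continue
        if line.startswith("p"):                 # header: p cnf <vars> <clauses>
            n_vars = int(line.split()[2])
            continue
        lits = [int(tok) for tok in line.split() if tok != "0"]
        for lit in lits:
            edges.append((abs(lit) - 1, clause_id, 1 if lit > 0 else -1))
        clause_id += 1
    return n_vars, clause_id, edges

cnf = ["p cnf 3 2", "1 -2 0", "2 3 0"]           # (x1 or not x2) and (x2 or x3)
print(cnf_to_graph(cnf))
```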
<|MaskedSetence|> <|MaskedSetence|> [9], where each instance has 7 players with up to 3 followers each, and compare against the problem-specific (sequential) inner approximation algorithm (Inn-S) from Carvalho et al. <|MaskedSetence|> Large NASPs instances, such as the H7 set, are numerically badly scaled and thus helpful to perform stress tests on the numerical stability of the algorithms..
**A**: We also introduce 50 harder instances H7 with 7 players with 7 followers each. **B**: 6.2.2 Computational Tests Instance and Parameters. We set a numerical tolerance of $\varepsilon=10^{-5}$, and consider a time limit of 300 seconds. **C**: We employ the 50 instances InstanceSet B from Carvalho et al.
BCA
BCA
BCA
ABC
Selection 3
<|MaskedSetence|> <|MaskedSetence|> We have to choose an order in which we will sequentially open these boxes. Each time we open the next box, we learn the number in the box. <|MaskedSetence|> Or we can continue opening the boxes until the last box, in which case we have to accept this last number. The goal is to accept the largest number in these $n$ boxes..
**A**: In this problem, we are given $n$ boxes labeled $\{1,2,\ldots,n\}$ by an adversary, each containing a single number chosen from an unknown distribution, where these distributions are non-identical. **B**: Then we can either stop opening the remaining boxes and accept the current number, in which case the game ends. **C**: Only recently, the problem has attracted attention from the learning communities who considered modifications in which the algorithm is given a prediction about the best among applicants [6], a prediction interval for evaluation of each applicant [41], or the objective is to rank all applicants rather than choosing the best subset of them [8]. Free-order secretary. We focus on a variant of the secretary problem called the free-order secretary problem.
CAB
CAB
CAB
BCA
Selection 3
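For intuition, a quick simulation of the classic observe-then-commit strategy in this game: open a prefix of the boxes in a uniformly random order, then accept the first number beating everything seen so far (the 37% rule, whose success probability approaches 1/e). The value distribution below is arbitrary.

```python
import numpy as np

def play(values, rng, frac=1 / np.e):
    order = rng.permutation(len(values))        # free-order: we pick a random order
    opened = values[order]
    k = int(len(values) * frac)                 # observation prefix
    best_seen = opened[:k].max() if k > 0 else -np.inf
    for v in opened[k:]:
        if v > best_seen:
            return v == values.max()            # accepted; did we get the max?
    return opened[-1] == values.max()           # forced to take the last box

rng = np.random.default_rng(0)
values = rng.normal(size=100)                   # adversarial numbers, distinct w.p. 1
wins = sum(play(values, rng) for _ in range(10000))
print("success rate ~", wins / 10000)           # close to 1/e ~ 0.368
```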
<|MaskedSetence|> Figure 9 shows the two most similar deformable prototypes (learned by the best performing Deformable ProtoPNet using $3\times 3$ prototypes) for each of three test images from CUB-200-2011 [47]. Figure 10 shows the two most similar deformable prototypes (learned by the best performing Deformable ProtoPNet using $2\times 2$ prototypes) for each of three test images from CUB-200-2011 [47]. Figure 11 shows the two most similar deformable prototypes (learned by the best performing Deformable ProtoPNet) for each of three test images from Stanford Dogs [18].
**A**: In general, the most similar prototypes for a given image come from the same class as that of the image, and there is some semantic correspondence between a prototypical part and the image patch it is compared to under the spatial arrangement of the prototypical parts where the deformable prototype achieves the highest similarity across the image. . **B**: For each test image on the left, the top row shows the two most similar deformable prototypes, and the bottom row shows the spatial arrangement of the prototypical parts on the test image that produced the similarity score for the corresponding prototype. **C**: In this section, we visualize the most similar prototypes to a given test image (we call this a local analysis of the test image), for a number of test images.
CBA
CBA
ACB
CBA
Selection 2
<|MaskedSetence|> A slice of a layout corresponds to an edge cut in the dual graph that contains at most two edges of the outer face. <|MaskedSetence|> <|MaskedSetence|> We obtain a polynomial-time algorithm using a key insight: If we already know a slice, then in each subproblem, we know two corner rectangles. Our algorithm will utilize partial information about the corner rectangles. Problem Formulation. .
**A**: If a slice is one-sided, the edges in the edge cut form a star; and the edge cut is determined by its edges on the boundary of the outer face. **B**: Assume that we are given a near-triangulation $G$ (that is, a plane graph where every bounded face is a triangle). **C**: A brute force algorithm would guess a slice (i.e., edge cut), and recurse on the two subproblems; it would run in exponential time.
BAC
ACB
BAC
BAC
Selection 3
<|MaskedSetence|> We first take basic metrics for a network as initial features for our graph neural network. These metrics provide a fundamental understanding of the network, especially for synthetic networks without given features. These initial features are concatenated and fed into GAT, which employs self-attention mechanisms that allow nodes to weigh the importance of their neighbors’ features. <|MaskedSetence|> <|MaskedSetence|>
**A**: Figure 2: Overview of the FGDD framework. **B**: After selecting and removing a certain node, we update the replay memory with the state, action, and reward. **C**: We then have a latent-space representation for each node and feed these representations to a reinforcement learning agent.
ACB
ACB
ACB
ACB
Selection 3
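A minimal sketch of the "basic metrics as initial features" step, using NetworkX. The concrete metric set (degree, clustering coefficient, PageRank, core number) and the standardization are illustrative assumptions rather than FGDD's exact choices; the replay-memory update would then simply push (state, action, reward, next state) tuples after each node removal.

```python
import networkx as nx
import numpy as np

def initial_node_features(G: nx.Graph) -> np.ndarray:
    """Concatenate a few basic structural metrics per node; the metric set
    here is an illustrative assumption, not necessarily FGDD's."""
    deg = dict(G.degree())
    clus = nx.clustering(G)
    pr = nx.pagerank(G)
    core = nx.core_number(G)
    nodes = sorted(G.nodes())
    feats = np.array([[deg[v], clus[v], pr[v], core[v]] for v in nodes],
                     dtype=float)
    # Standardize each feature column before feeding it to the GNN.
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

G = nx.barabasi_albert_graph(100, 3, seed=0)
print(initial_node_features(G).shape)  # (100, 4)
```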
Baseline methods. Several widely accepted approaches are adopted to conduct a fair evaluation. These are standard cross-correlation (SCC) Raffel et al. (2018), the symmetric phase-only filter (SPOF) Wernet (2005), and robust phase correlation (RPC) Eckstein and Vlachos (2009). <|MaskedSetence|> <|MaskedSetence|> In addition to subjective visual judgement, three objective metrics are also employed to quantify the performance: 1) the root mean-square error (RMSE) Raffel et al. (2018); Lee, Yang, and Yin (2017b); Lee and Mei (2022), 2) the average endpoint error (AEE) Lagemann et al. <|MaskedSetence|>
**A**: To exclude the influence of other factors, single-pass cross-correlation without any post-processing is utilized for all tested methods. Evaluation criteria. **B**: (2021), 3) the execution time for different image sizes. **C**: Regarding the background subtraction methods, we choose the minimum intensity image from double-frame PIV images Honkanen and Nobach (2005) and the spatial low-pass filter (LPF) Adrian and Westerweel (2011), resulting in SCC-MIN and SCC-LPF.
CAB
CAB
BAC
CAB
Selection 4
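For intuition about the phase-based baselines (SPOF/RPC), here is a minimal integer-pixel phase-correlation sketch: whitening the cross-power spectrum leaves only phase, whose inverse FFT peaks at the displacement. Real PIV implementations add windowing, spectral filtering, and sub-pixel peak fitting; this toy omits all of that, and the function name is my own.

```python
import numpy as np

def phase_correlation_shift(a: np.ndarray, b: np.ndarray):
    """Phase-only correlation: whiten the cross-power spectrum so only
    phase carries displacement information, then locate the peak."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R = F / (np.abs(F) + 1e-12)
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # [3, -5]
```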
Following extensive research on 1D signals and 2D images (Goodfellow et al., 2020; Kingma and Dhariwal, 2018; Kingma and Welling, 2013; Oord et al., 2016; Prenger et al., 2019; Shen et al., 2018), methods that use a neural network to generate 3D point clouds have been explored in recent years. Achlioptas et al. (Achlioptas et al., 2017) first adopted simple fully connected layers as the generator to encode and generate point clouds. Valsesia et al. (Valsesia et al., 2018) used dynamic graph convolution networks to enhance the generation performance. Shu et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: (Shu et al., 2019) proposed TreeGAN, which generates point clouds from a tree-based network. **B**: (Yang et al., 2018) proposed FoldingNet, which deforms 2D squares into 3D surfaces to generate point clouds. **C**: Yang et al.
ACB
ACB
BAC
ACB
Selection 4
However, managing this large amount of unstructured data involves challenges that are intricately tied to regulatory requirements. Regulatory frameworks, such as the EU rules on gender-neutral pricing in insurance and the General Data Protection Regulation (GDPR) in Europe for data privacy, have a substantial impact on how this data must be handled. Indeed, the data collected may include information that is either not compliant with GDPR regulations or not aligned with gender-neutral principles, thereby raising valid ethical concerns. Additionally, the potential for ML to propagate unfairness by replicating social and historical biases that exist within the data is also a critical concern [64]. Currently, the assessment of such data points goes through a costly labeling process performed by experts, which is neither fast enough nor scalable. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Additionally, the paper explores incorporating a fairness constraint into the sampling strategies to ensure fairer outcomes.
**A**: The objective of this paper is to tackle and resolve several of these challenges by introducing various sampling methods that select data points which, once labeled, significantly improve the predictive performance of the model compared to randomly sampling the data. **B**: Hence, actuaries must seize these new and efficient methodologies to maintain and reinforce their expertise in risk evaluation. **C**: Implementing an accurate, cost-effective, and fair learning system is crucial for the insurance industry to overcome these challenges.
CBA
CBA
BAC
CBA
Selection 2
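As a hedged illustration of what such sampling strategies can look like, here is a generic uncertainty-sampling heuristic plus a crude group-capped variant. These are textbook active-learning sketches under my own assumptions, not the specific methods or fairness constraint proposed in the paper.

```python
import numpy as np

def uncertainty_sample(proba, k):
    """Pick the k unlabeled points whose predicted positive-class
    probability is closest to 0.5, i.e. where the model is least certain."""
    return np.argsort(np.abs(proba - 0.5))[:k]

def group_capped_sample(proba, group, k):
    """Same uncertainty score, but take at most k // n_groups points per
    (e.g. gender) group; a crude stand-in for a fairness constraint."""
    order = np.argsort(np.abs(proba - 0.5))
    cap = k // len(np.unique(group))
    picked, counts = [], {}
    for i in order:
        if counts.get(group[i], 0) < cap:
            picked.append(i)
            counts[group[i]] = counts.get(group[i], 0) + 1
        if len(picked) == k:
            break
    return np.array(picked)

rng = np.random.default_rng(0)
proba = rng.random(1000)               # scores from any fitted classifier
group = rng.integers(0, 2, size=1000)  # a protected attribute
print(uncertainty_sample(proba, 5), group_capped_sample(proba, group, 6))
```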
<|MaskedSetence|> <|MaskedSetence|> The homomorphic encryption-based methods exploit cryptography to mask privacy-sensitive information (e.g., system states) while still enabling the cloud to perform the MPC computation with encrypted data. In Darup et al. (2018b), homomorphic encryption is used to design a secure explicit MPC scheme for linear systems with state and input constraints. The encrypted fast gradient method and proximal gradient method are developed in Alexandru et al. (2018) and Darup et al. (2018a), respectively, to achieve implicit MPC for linear systems with input constraints. Despite strong privacy guarantees for the cloud-based MPC, the induced encryption and decryption procedures are computationally heavy, making them unsuitable for systems with limited onboard resources and stringent real-time constraints. Different from the homomorphic encryption-based methods, the algebraic transformation-based approaches rely on introducing transformation maps that act as masks, rendering the real signals of a local agent indiscernible to an attacker. More specifically, the main idea of the algebraic transformation methods is to design appropriate transformation maps to protect privacy-sensitive signals and construct a different but equivalent MPC problem. Without knowing the original MPC problem, the cloud solves the equivalent MPC problem and provides the plant with the corresponding optimal control action. By using the inverse transformation maps, the plant can recover the optimal control action for the original problem. <|MaskedSetence|> For example, in Xu and Zhu (2015), non-singular matrices are utilized to produce a transformation mechanism for linear MPC in networked control systems. In Xu and Zhu (2017), orthogonal matrices are combined with homomorphic encryption to design a hybrid privacy preservation scheme for output-feedback MPC. In Naseri et al. (2022), random transformations are utilized to achieve privacy preservation for set-theoretic MPC. Furthermore, isomorphisms and symmetries are adopted in Sultangazin and Tabuada (2021) as a source of transformation to protect the privacy of system signals.
**A**: As such, several privacy preservation schemes for cloud-based MPC have been proposed, which can be mainly categorized into homomorphic encryption-based methods (Schlüter and Darup, 2020; Darup et al., 2018b; Alexandru et al., 2018; Darup et al., 2018a) and algebraic transformation based methods (Xu and Zhu, 2015, 2017; Sultangazin and Tabuada, 2021; Naseri et al., 2022). **B**: This idea has been initially applied to accomplish privacy preservation in optimization (Weeraddana et al., 2013; Weeraddana and Fischione, 2017; Mangasarian, 2011; Wang et al., 2011) and then extended to cloud-based MPCs. **C**: Considering the aforementioned concerns and the growing awareness of security in cyber-physical systems, it is imperative to protect the privacy of agents if cloud-based control is used.
CAB
CAB
ABC
CAB
Selection 2
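A minimal sketch of the algebraic-transformation idea on an unconstrained quadratic cost (real MPC adds constraints and a horizon, and this is not any cited scheme verbatim): the plant masks the problem with a random nonsingular map u = Tz, the cloud solves only the masked problem, and the plant recovers the true optimizer with the inverse map.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Private unconstrained quadratic cost: min_u 0.5 u'Hu + f'u
H = np.diag(rng.uniform(1.0, 3.0, n))
f = rng.normal(size=n)

# Plant-side mask: a random matrix T (nonsingular with prob. 1), u = T z.
T = rng.normal(size=(n, n))
H_masked = T.T @ H @ T   # the cloud only ever sees (H_masked, f_masked)
f_masked = T.T @ f

z_star = np.linalg.solve(H_masked, -f_masked)  # cloud solves masked problem
u_recovered = T @ z_star                       # plant applies the map u = Tz
u_true = np.linalg.solve(H, -f)
print(np.allclose(u_recovered, u_true))        # True
```

The recovery works because T (T'HT)^(-1) T' = H^(-1) for any nonsingular T, so the masked minimizer maps back exactly to the original one.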
Table 1. Minimal sensitivity reporting. Significance level of 5%. The first three columns of Table 1 show the estimates for the average treatment effect (ATE) of 401(k) eligibility on net financial assets under this conditional ignorability assumption. For these estimates, we follow the same strategy used in Chernozhukov et al. (2018a), and we estimate the ATE using DML with Random Forests, considering both a partially linear model (PLM) and a nonparametric model (NPM). (Footnote: we use Random Forest for both the outcome and the treatment regressions and estimate the parameters using DML with 5-fold cross-fitting.) <|MaskedSetence|> <|MaskedSetence|> (2018a). As we can see, after flexibly taking into account observed confounding factors, although the estimates of the effect of 401(k) eligibility on net financial assets are substantially attenuated, they are still large, positive, and statistically significant (approximately $9,000 for the PLM and $8,000 for the NPM). With the nonparametric model, we further explore heterogeneous treatment effects by analyzing the ATE within income quartile groups. <|MaskedSetence|> We see that the ATE varies substantially across groups, with effects ranging from approximately $4,000 (first quartile) to almost $18,000 (last quartile).
**A**: Estimates are then combined using the median as the final estimate, incorporating variation across experiments into the standard error as described in Chernozhukov et al. **B**: The results are shown in Figure 3(a). **C**: In order to reduce the variance that stems from sample splitting, we repeat the procedure 5 times.
CAB
CBA
CAB
CAB
Selection 4
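A compact sketch of the PLM estimator described above (cross-fitted DML with random forests). The toy data-generating process and default forest settings are my assumptions, and no standard errors are computed.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_plm(X, D, Y, n_folds=5, seed=0):
    """Cross-fitted double ML for the partially linear model
    Y = theta*D + g(X) + e: residualize Y and D on X with random forests,
    then regress the Y-residuals on the D-residuals."""
    y_res = np.zeros_like(Y, dtype=float)
    d_res = np.zeros_like(D, dtype=float)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        my = RandomForestRegressor(random_state=seed).fit(X[train], Y[train])
        md = RandomForestRegressor(random_state=seed).fit(X[train], D[train])
        y_res[test] = Y[test] - my.predict(X[test])
        d_res[test] = D[test] - md.predict(X[test])
    return float(d_res @ y_res / (d_res @ d_res))

# Toy check: theta = 2 by construction.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
D = X[:, 0] + rng.normal(size=2000)
Y = 2.0 * D + np.sin(X[:, 0]) + rng.normal(size=2000)
print(dml_plm(X, D, Y))  # close to 2
```

Repeating `dml_plm` over several seeds and taking the median mirrors the variance-reduction step described in the excerpt.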
Currently, we support 24 feature engineering operations abstracted into three categories: generators, selectors, and graph features. <|MaskedSetence|> <|MaskedSetence|> Graph features focus on generating graph-level features. We summarize the supported generators in Table IV, including Graphlets [147], EigenGNN [148], PageRank [149], local degree profile, normalization, one-hot degrees, and one-hot node IDs. For selectors, GBDT [150] and FilterConstant are supported. An automated feature engineering method, DeepGL [151], is also supported, functioning as both a generator and a selector. <|MaskedSetence|>
**A**: The selectors automatically filter out and compress features to ensure they are compact and informative. **B**: The generators aim to create new node and edge features based on the current node features and graph structures. **C**: For graph features, NetLSD [152] and a set of graph feature extractors implemented in NetworkX [132] are wrapped, e.g.,.
BAC
BAC
CBA
BAC
Selection 4
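To make the generator/selector split concrete, here is a small sketch of a one-hot degree generator and a constant-filter selector. The function names and signatures are illustrative, not the library's actual API.

```python
import networkx as nx
import numpy as np

def one_hot_degree(G: nx.Graph, max_degree: int = 32) -> np.ndarray:
    """A 'generator': derive a new node-feature block (one-hot clipped
    degree) from graph structure alone."""
    nodes = sorted(G.nodes())
    feats = np.zeros((len(nodes), max_degree + 1))
    for i, v in enumerate(nodes):
        feats[i, min(G.degree(v), max_degree)] = 1.0
    return feats

def filter_constant(feats: np.ndarray) -> np.ndarray:
    """A 'selector': drop zero-variance (constant) feature columns."""
    return feats[:, feats.std(axis=0) > 0]

G = nx.karate_club_graph()
X = filter_constant(one_hot_degree(G))
print(X.shape)
```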
<|MaskedSetence|> <|MaskedSetence|> First, we would like to support conjunctive queries (with projections) instead of only join queries. Second, we would like to support partial lexicographic orders. These are specified by a permutation $L$ of a subset of the non-projected variables, and the answers are ordered lexicographically by this permutation as before; the order between answers that agree on all variables in $L$ is not specified as part of the problem definition. <|MaskedSetence|> More formally, we say that an order $\preceq$ of the query answers is compatible with a partial lexicographic order if it is a refinement of that preorder.
**A**: Projections and Partial Lexicographic Orders In this section, we discuss two extensions. **B**: This way, we define a preorder on the query answers, and turning it into a (complete) order is left to the discretion of the algorithm. **C**: 8.3.
CAB
BAC
CAB
CAB
Selection 1
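A small worked example of a compatible order: sorting answers by the key induced by the permutation L refines the preorder, and ties on L may be broken arbitrarily. The variable names and values here are made up for illustration.

```python
# Query answers as dicts over non-projected variables x, y, z.
answers = [
    {"x": 2, "y": 1, "z": 9},
    {"x": 1, "y": 3, "z": 0},
    {"x": 1, "y": 3, "z": 7},
    {"x": 2, "y": 0, "z": 4},
]
L = ["y", "x"]  # a permutation of a *subset* of the variables

# Sorting by the key (a[y], a[x]) yields one order compatible with the
# partial lexicographic order: the two answers agreeing on all of L
# (y=3, x=1) may be emitted in either relative order.
for a in sorted(answers, key=lambda ans: tuple(ans[v] for v in L)):
    print(a)
```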
<|MaskedSetence|> <|MaskedSetence|> CAC transforms medical coding into a knowledge-based environment, and the role of clinical coding professionals shifts to that of clinical coding editors or analysts (Morsch, 2010). However, the clinical coding editor still has the ultimate responsibility. They can reject any inappropriate clinical coding suggestions made by CAC software and send a consultation letter to clinicians to clarify ambiguous or contradictory documents (Smith and Bronnert, 2010). <|MaskedSetence|>
**A**: (Campbell and Giadresco, 2020) reviewed and discussed the CAC literature, and their findings indicated that CAC positively impacts coding quality and accuracy. Additionally, clinical coding personnel should view CAC as an opportunity rather than a threat. **B**: The automated clinical coding workflow must still follow the clinical coding principles and specifications. Using machine learning methods or computer-assisted clinical coding to extract practical information from EHRs and assign medical codes is a real need of every healthcare organization. **C**: CAC is a continuously developing technology that can improve the accuracy and quality of clinical coding and relieve the pressure on clinical coding personnel by assigning diagnostic and procedure codes from EHRs to automate clinical coding. Campbell et al.
CAB
BCA
CAB
CAB
Selection 4
In the real world, because light intensity and the number of light sources vary, it is common to capture images of the same scene at different lightness levels, so lightness changes are unlikely to arouse suspicion. Because of these advantages, the lightness attack seems a promising direction for generating non-suspicious unrestricted adversarial images. Thus we propose the Adversarial Lightness Attack (ALA), a novel lightness adjustment approach that generates natural adversarial images by applying and improving a differentiable filter (Hu et al., 2018) originally used to adjust image attributes in image processing. <|MaskedSetence|> <|MaskedSetence|> To obtain better attack performance and image quality, we propose two improvements: ❶ unconstrained enhancement, ❷ naturalness-aware regularization. <|MaskedSetence|>
**A**: The effectiveness of ALA is verified on two popular datasets for different tasks (i.e., ImageNet for image classification and Places-365 for scene recognition). **B**: Using a filter-based attack has three advantages: ❶ The filter is human-understandable, which can guide the lightness attack in the real world, ❷ The filter is differentiable, which is more time-efficient than search-based attacks (e.g., ColorFool (Shamsabadi et al., 2020b)). **C**: ❸ The filter is lightweight and resource-saving (only dozens of parameters). However, the adversarial examples generated by directly using a monotonic lightness filter achieve low attack performance and low image quality.
BCA
CAB
BCA
BCA
Selection 3
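A hedged sketch of a parametric lightness filter in this spirit: a piecewise-linear curve whose segment slopes are left unconstrained (standing in for "unconstrained enhancement"), plus a simple pull-toward-identity penalty (standing in for naturalness-aware regularization). The parameterization, names, and penalty form are my assumptions, not ALA's exact formulation.

```python
import numpy as np

def segment_filter(light, theta):
    """Piecewise-linear lightness curve on [0, 1] whose per-segment slopes
    theta are unconstrained (possibly negative); the output is clipped
    back to the valid lightness range."""
    cum = np.concatenate([[0.0], np.cumsum(theta)])
    xs = np.linspace(0.0, 1.0, len(theta) + 1)
    return np.clip(np.interp(light, xs, cum), 0.0, 1.0)

def naturalness_penalty(theta):
    """Toy naturalness regularizer: keep the curve near the identity,
    which corresponds to all segment slopes equal to 1/len(theta)."""
    return float(np.mean((theta - 1.0 / len(theta)) ** 2))

L_chan = np.random.default_rng(0).random((8, 8))  # toy lightness channel
theta = np.array([0.3, 0.9, -0.2, 1.4])           # adversarially tuned slopes
print(segment_filter(L_chan, theta).shape, naturalness_penalty(theta))
```

In an actual attack, theta would be optimized by gradient ascent on the classifier loss while naturalness_penalty keeps the adjusted image plausible.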
<|MaskedSetence|> We can, thus, say that our model outperforms PRAE in one-to-many mapping. We also test the impact of channel separation on the translation accuracy by training our model with visual features extracted with the regular CAE as described in Yamada et al.’s approach [30]. It is clearly indicated in Table III that using variational autoencoders instead of standard ones increases the accuracy significantly. Using PVAE with channel-separated CAE improves the results further, indicating the superiority of channel separation in our tabletop scenario. Therefore, our approach with variational autoencoders and a channel-separated CAE is superior to both PRAE and PVAE with regular visual feature extraction. In Experiment 1b, in order to test the limits of our PVAE and the impact of more data with a larger corpus, we add three more colour options for the cubes: yellow, cyan and violet. These secondary colours are combined amongst themselves for the arrangements in addition to the colour combinations used in the first experiment, i.e., a cube of a primary colour and a cube of a secondary colour do not co-occur. Therefore, this experiment has 12 arrangements. <|MaskedSetence|> <|MaskedSetence|>
**A**: As in Experiment 1a, each sentence can be described in eight alternative ways. **B**: For Experiment 1a, our model is able to translate approximately 90% of the patterns in the test set, whilst PRAE could translate only one third of the patterns, as can be seen in Table III. **C**: Moreover, the vocabulary size is extended from 17 to 23 in Experiment 1b (two alternative words for each colour; see Table II).
BCA
BCA
BCA
ABC
Selection 2