Ultra-Wide-Band (UWB) is defined as any wireless transmission scheme that occupies a bandwidth of more than 25% of its center frequency, or greater than 500 MHz, over the 3.1-10.6 GHz frequency band. Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) is a spectrally efficient technique proposed for high data rate, short range UWB applications. This approach uses a conventional OFDM system, combined with bit-interleaved coded modulation (BICM) and frequency hopping, for improved diversity and multiple access. In MB-OFDM, the channel is assumed to change so slowly that it can be considered time invariant during the transmission of an entire frame. Channel estimation is performed with one known symbol (pilot) transmitted at the beginning of the information frame, while the rest of the frame is decoded based on the estimated channel. Due to the limited number of pilots, the estimate of the channel is imperfect and the receiver only has access to this noisy channel estimate. However, the receiver/decoder metric for any maximum-likelihood (ML) based detector requires knowledge of the exact channel. A standard sub-optimal technique, known as mismatched ML decoding, consists in replacing the exact channel by its estimate in the receiver metric. Hence, the resulting decoding metric is not adapted to the presence of channel estimation errors (CEE). This practical scenario, with mismatched decoding, leads to the following important questions. Firstly, in the presence of CEE with a given amount of training, what are the limits of reliable transmission (the capacity)? Secondly, what type of encoder/decoder is necessary to transmit information reliably close to these performance limits? The first problem has been recently addressed in , characterizing the maximal reliable information rate by using the notion of estimation-induced outage capacity. Unfortunately, the theoretical encoder/decoder used to achieve this capacity cannot be implemented in practical communication systems. Besides, mismatched ML decoding has been shown to be largely suboptimal for the considered class of channels. Basically, the transmitter and the receiver strive to construct codes that ensure reliable communication with a given quality of service (QoS), no matter what degree of channel estimation accuracy arises during the transmission. The QoS requirements stand for achieving target rates with small error probability even with very bad channel estimates. In this paper, we propose a practical decoding metric for the aforementioned mismatch scenario. This metric uses the estimated channel and the *a posteriori* pdf characterizing the channel estimation process, and thus matches well the channel knowledge available at the receiver. Based on the derived metric, we formulate our decoding rule for BICM MB-OFDM. Interestingly, the present metric coincides with that derived for space-time decoding from independent results in . In order to determine the limits of reliable information rates associated with the proposed metric, we use the complementary results obtained in . This allows us to compare the maximal supported rate associated with our metric with that of the classical mismatched ML decoder and of the theoretical decoder. Our results are relevant for communication systems where a prescribed quality of service (given by the outage probability) must be ensured even in the presence of imperfect channel estimation. The outline of this paper is as follows.
In Section we describe the system model for MB-OFDM transmission over a frequency-selective fading channel. Section presents the pilot-assisted channel estimation: we specify the statistics of the CEE and then calculate the posterior distribution of the perfect channel conditioned on the estimated channel. This posterior distribution is used in Section to derive the ML decoding metric in the presence of imperfect channel state information at the receiver (CSIR). In Section , we use the general modified metric for soft decoding of BICM MB-OFDM systems. In Section , we derive the achievable outage rates of a receiver using the proposed metric. Section illustrates via simulations the performance of the proposed receiver in realistic UWB channel environments, and Section concludes the paper. Notational conventions are as follows: upper-case bold symbols denote vectors, $\mathbf{I}_N$ represents an ($N \times N$) identity matrix; $(.)^T$ and $(.)^{\mathcal{H}}$ denote vector transpose and Hermitian transpose, respectively.
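As a rough illustration of the difference between the two metrics discussed above (a sketch based on the description here, not the paper's exact derivation), let $Y_k$ be the received symbol on subcarrier $k$, $X_k$ a candidate coded symbol, $H_k$ the true channel coefficient and $\hat{H}_k$ its pilot-based estimate with a posteriori pdf $p(H_k \mid \hat{H}_k)$. The mismatched ML metric simply substitutes the estimate, whereas the metric considered here averages the likelihood over the channel estimation errors:
$$
D_{\mathrm{mismatched}}(Y_k,X_k) = -\log p\bigl(Y_k \mid X_k,\hat{H}_k\bigr), \qquad
D_{\mathrm{proposed}}(Y_k,X_k) = -\log \int p\bigl(Y_k \mid X_k,H_k\bigr)\, p\bigl(H_k \mid \hat{H}_k\bigr)\, dH_k .
$$
Under a Gaussian noise and Gaussian CEE model, the integral admits a closed form: a Euclidean-type metric whose effective noise variance is inflated by a term proportional to the estimation error variance and to $|X_k|^2$.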
This article introduces the mechanical structure and electric control of a new type of CNC automatic cutter. It analyzes the key technologies of the stepping-motor constant-torque indexing drive scheme, cutter motor control, position feedback, and so on. The cutter can be set to cut strip tubes and sheets to length in accordance with the requirements. The cut length and cut speed can be set, with counting and a power-off memory function. The whole process is operated digitally, and the machine is flexible and easy to operate. The cutter is essential equipment on automatic production lines in chemical enterprises, such as those producing heat-shrinkable products.
Modulation classification is an essential step of signal processing and has been regularly applied in the field of telecommunication. Since variations of frequency with respect to time remain a vital distinction among radio signals having different modulation formats, these variations can be used for feature extraction by converting 1-D radio signals into the frequency domain. In this paper, we propose a scheme for Automatic Modulation Classification (AMC) using modern architectures of Convolutional Neural Networks (CNN), by generating spectrum images of eleven different modulation types. Additionally, we perform resolution transformation of the spectrograms, which results in up to a 99.61% reduction in computational load and 8x faster conversion from the received I/Q data. The proposed AMC is implemented on CPU and GPU to recognize digital as well as analogue modulation schemes. The performance is evaluated on existing CNN models including SqueezeNet, Resnet-50, InceptionResnet-V2, Inception-V3, VGG-16 and Densenet-201. Best results of 91.2% are achieved in the presence of AWGN and other noise impairments in the signals, showing that the transformed spectrogram-based AMC has good classification accuracy, as the spectral features are highly discriminant and CNN-based models have the capability to extract these high-dimensional features. The spectrograms were created under different SNRs ranging from 5 to 30 dB with a step size of 5 dB to observe the experimental results at various SNR levels. The proposed methodology is efficient enough to be applied in wireless communication networks for real-time applications.
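As a rough, hypothetical illustration of the spectrogram-generation and resolution-transformation step described above (the window length, image size and normalization are illustrative choices, not the paper's actual parameters), a block of I/Q samples could be converted into a downsized spectrogram image as follows:

```python
import numpy as np
from scipy.signal import spectrogram

def iq_to_spectrogram_image(iq, fs=1.0, nperseg=256, out_size=(64, 64)):
    """Convert complex I/Q samples into a small log-magnitude spectrogram image.

    iq       : 1-D complex array of received I/Q samples
    fs       : sampling rate (arbitrary units here)
    nperseg  : STFT window length
    out_size : target (height, width) after resolution transformation
    """
    # Short-time Fourier analysis of the complex baseband signal.
    f, t, sxx = spectrogram(iq, fs=fs, nperseg=nperseg,
                            noverlap=nperseg // 2, return_onesided=False)
    img = 10.0 * np.log10(sxx + 1e-12)          # log scale in dB

    # Crude resolution transformation: average-pool down to out_size.
    h, w = out_size
    rows = np.array_split(np.arange(img.shape[0]), h)
    cols = np.array_split(np.arange(img.shape[1]), w)
    small = np.array([[img[np.ix_(r, c)].mean() for c in cols] for r in rows])

    # Normalize to [0, 1] so the image can be fed to a CNN such as ResNet-50.
    small = (small - small.min()) / (small.max() - small.min() + 1e-12)
    return small

# Example: a noisy QPSK-like burst (purely illustrative, not a dataset sample).
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=16384) / np.sqrt(2)
noisy = symbols + 0.1 * (rng.standard_normal(16384) + 1j * rng.standard_normal(16384))
image = iq_to_spectrogram_image(noisy)
print(image.shape)  # (64, 64)
```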
Individuals tend to associate disproportionally with others who are similar to themselves . This tendency, referred to by social scientists as *homophily*, manifests itself with respect to similarities due to gender, race and ethnicity, social class background, and other sociodemographic, behavioral and intrapersonal characteristics . An effect of homophily has also been demonstrated to exist in human relationships with technology, in particular with conversational agents, where humans find ethnically congruent agents more persuasive and trustworthy (e.g. ). It is logical to hypothesize that homophily effects would also extend to human-robot interaction (HRI). However, to the best of our knowledge, no previous work addressing the hypothesis of ethnic homophily in HRI exists. One explanation for the lack of studies of homophily in HRI is the difficulty of endowing a robot with sociodemographic characteristics. While embodied conversational agents may be portraying human characters much like would be done in an animated film or a puppet theater, we contend that a robot, even with the state-of-the-art human likeness, retains a degree of its machine-like agency and its embeddedness in the real world (or, in theatric terms, a "broken fourth wall"). Expressing ethnic cues with virtual agents is not completely problem-free either. For example, behaviors that cater to stereotypes of a certain community can be found offensive to the members of the community the agent is trying to depict . We speculate that endowing non-humanlike robots with strong ethnic cues, such as appearance, may lead to similar undesirable effects. Some ethnic cues, such as the choice of speaking American English versus Arabic, can be strong, but may not be very useful for a robot that interacts with multiple interlocutors in a multicultural setting. In this work, we argue for the need to explore more subtle, behavior-based cues of ethnicity. Some of these cues are known to be present even in interactions in a foreign language (see for an overview of pragmatic transfer). For example, it is known that politeness strategies, as well as conventions of thanking, apologizing, and refusing *do* find their way into the second language of a language learner . As we will show, even such subtle behaviors can be more powerful cues of ethnicity than appearance cues, such as a robot's face. We acknowledge the integral role of behaviors by referring to the combination of a robot's appearance and behaviors that aims to express a sociodemographic identity as a *robot character*. The term *culture*, often used in related work, is "one of the most widely (mis)used and contentious concepts in the contemporary vocabulary" . To avoid ambiguity, we will outline the communities of robot users that we are considering in terms of their *ethnicity*, in particular, their native language, and, when possible, their country of residence. As pointed out in , even the concepts of a native language and mother tongue do not specify clear boundaries. Nevertheless, we will describe a multistage process where we (1) attempt to identify verbal and nonverbal behaviors that are different between native speakers of American English and native speakers of Arabic, speaking English as a foreign language, then (2) evaluate their salience as cues of ethnicity, and, finally, (3) implement the behaviors on a robot prototype with the goals of evoking ethnic attribution and homophily. 
First, in Sections  and we outline various sources of identifying ethnically salient behavior candidates, including qualitative studies and corpora analyses. Then, in Section , we present our approach to evaluating salience of these behaviors as ethnic cues via crowdsourcing. In Section , we describe a controlled study with an actual robot prototype that evaluates our hypotheses of the possibility of evoking ethnic attribution and homophily via verbal and nonverbal behaviors implemented on a robot. We found support for the effect of behaviors on ethnic attribution but there was no strong evidence of ethnic homophily. We discuss limitations and possible reasons for the findings in Section , and conclude in Section .
An efficient iterative numerical FDTD formulation has been devised for the study of anisotropic media. It can calculate the interior and exterior fields of general anisotropic media. The formulation can be used in electromagnetic compatibility studies and in the analysis of specific bioelectromagnetic effects of the interaction between electromagnetic waves and anisotropic media.
The computationally demanding virtual simulation of molecules and materials, performed to predict their physical, materials and chemical properties, has become a routine tool in the molecular and materials sciences. Current efforts geared towards computational materials and molecular design might one day enable the realization of the holy grail of automated experimental design and discovery. Driven by the accelerating progress of compute hardware and statistical learning (artificial intelligence), first seminal examples of integrating sophisticated software and robotics to perform experimental sequences and to establish rules and trends among properties and materials, as well as their synthesis, *in realiter* have recently been introduced. However, the lofty goal of 'materials on demand' has still remained elusive, even when pursued just *in silico*. The use of empirical trends to guide experimental design has had a long tradition in the chemical sciences. Popular examples include Mendeleev's discovery of the periodic table, Hammett's relationship, Pettifor's numbering scheme, the Bell-Evans-Polanyi principle, Hammond's postulate, and Pauling's covalent bond postulate. Modern systematic attempts to establish and exploit such rules in terms of quantitative structure-property relationships have led to computationally advanced bio-, chem-, and materials-informatics methodologies. Unfortunately, these methods are typically inherently limited to certain applicability domains, and do not scale due to their empirical nature. To rigorously explore the high-dimensional chemical compound space (CCS), i.e. the combinatorially scaling number of all conceivable molecules or materials (usually defined by composition, constitution, and conformation), the quantum mechanics of electrons ought to be invoked. It is thus not surprising that *ab initio* based materials design approaches have been at the forefront for more than 20 years, and have played a major role in popularizing the use of efficient and accurate quantum methods, such as density functional theory. Sampling CCS from scratch, even when done within efficient optimization algorithms, is typically an encyclopedic endeavour by nature, and ignores many of the underlying relations among different properties and materials. Quantum machine learning models statistically exploit such implicit correlations, hidden in the data, and have been successfully used to accelerate CCS exploration campaigns. Machine learning efficiency and transferability demonstrably benefit greatly from explicitly enforcing known relationships (e.g. translational, rotational, or atom-index invariances) directly in the model construction, rather than having to learn them agnostically from data. Specific examples include explicitly imposing forces and curvatures in the loss function, spatial symmetry relations, or arbitrary differential relations. But even for the most efficient and transferable statistical models, e.g. the atom-in-molecule fragment based approach, the acquisition of training data in sufficient quantity and quality requires considerable up-front investments. In this paper, we introduce the fundamental notion of a new symmetry relation in CCS which is fully consistent with the *ab initio* view of matter, and which effectively enables us to solve the inverse materials design problem in a non-empirical and highly efficient manner.
Spatial symmetry considerations have been crucial for the unravelling of some of the most fundamental laws of nature and are heavily used in many fields. In *ab initio* calculations, for example, symmetry group theory arguments are common to reduce computational complexity and load. Symmetry constraints on compositional degrees of freedom would be highly desirable in order to establish general rules among distinct materials and properties, and to generally improve our understanding of chemical compound space . In analogy to conventional spatial chirality, we here define 'alchemical chirality' as a reflection plane in the space spanned by nuclear charges at fixed atomic positions as they enter the electronic Schrödinger equation. An illustrative comparison is given in Fig.  (panels A and B), for conventional enantiomers consisting of a tetra-valent carbon atom with four different substitutions and for alchemical enantiomers consisting of doubly BN doped carbon in the diamond crystal structure. Fig. C compares and relates this newly described alchemical reflection $\sigma_A$ with the conventional spatial reflection $\sigma_S$ for the same dopant pattern as in Fig. B for a single molecular skeleton. In this case, four subsequent reflections alternating between alchemical and spatial reflections return to the original molecule. While any spatial reflection leaves the molecule unchanged, an alchemical reflection affects the nuclear charges and therefore creates a different molecule as reflection image. Exchange of the dopant atom sequence in Fig. B from NBBN $\rightarrow$ BNNB is an alchemical reflection around pristine diamond: for each site, the nuclear charge difference to diamond gets inverted. As such, in the space spanned by all nuclear charges of the system, pristine diamond corresponds to a reflection center. All reflections that leave the total nuclear charge unchanged are defined by a hyperplane (cf. Figure 1D) which we refer to as the nuclear charge reference plane. In other words: Treating the change from pristine diamond to the two doped variants as a perturbation of the system Hamiltonian, there is an anti-symmetry relation between these alchemical perturbations. No other spatial symmetry operation (rotation, reflection, or inversion) can interconvert the constitutional isomers in Fig. B, thereby necessitating a fourth dimension, namely the nuclear charges. This yields 4-dimensional alchemical chirality. Note that chirality crucially depends on dimensionality, e.g. the letter L is chiral within 2 dimensions only. The chiral center of that operation in itself is a compound (diamond in Fig. B and benzene in Fig. C). Alchemical enantiomers exist only if distinct atoms can be mapped onto each other under a symmetry operation, a consequence of the reflection in nuclear charge space that defines alchemical enantiomers. This implies that the sum over all nuclear charges of alchemical enantiomers is identical (see Fig. D). If a compound has no spatial symmetry, atom sites with similar electron density derivatives $\partial^n_Z \rho$ constitute these pairs just as strictly symmetry-equivalent atoms do. Alchemical chirality requires a one-to-one correspondence of sites with opposite change in nuclear charge. At least two such pairs need to exist in a compound to obtain alchemical enantiomers which are different constitutional isomers. For a single pair of symmetry-equivalent atoms, the alchemical reflection in nuclear charge space would trivially connect two spatially symmetric compounds. 
In summary: *Exact alchemical enantiomers* are defined as two spatially non-superimposable, alchemically coupled, and iso-electronic compounds with the same formal charge, where each transmutating atom is assigned to exactly one subset within each of which averaging of nuclear charges results in identical chemical environments. Note that alchemical enantiomers can differ in chemical compositions, and that any given alchemical enantiomer can have as many alchemical mirror images as different valid averaged reference compounds can be defined. An alchemical enantiomer and all its possible mirror images are degenerate in the electronic energy up to 3rd order. *Approximate alchemical enantiomers* differ from their exact analogue in that the averaging results in similar, rather than identical, chemical environments for all transmutating atoms. Alchemical enantiomers have approximately the same electronic energy (see Theory section below). This symmetry is then broken by (a) the nuclear-nuclear repulsion and (b) geometry relaxation into distinct total energy minima. The nuclear repulsion will typically dominate, implying that alchemical enantiomers with spatially closer atoms of higher nuclear charge will exhibit higher total energies than their respective mirror images. While the restriction to fixed atomic positions might seem severe at first, we highlight in the following the importance and relevance of classes of fixed configurational frameworks for materials design applications. More specifically, we discuss the implications of alchemical chirality for system classes that are low dimensional in their structural degrees of freedom, and where the problem is dominated by the combinatorial scaling due to varying chemical composition. Examples abound and include any variation of graphitic motifs (studied below) prevalent in nano-technological applications, in inorganic materials such as MAX-phases, or in organic electronics, e.g. polycyclic aromatic hydrocarbon derivatives, as well as other rigid scaffolds, such as Metal-Organic-Frameworks or perovskites. All these systems would be directly amenable to alchemical chirality-based estimates (ACE) i.e. approximating the electronic energies for alchemical enantiomers as degenerate, and adding nuclear repulsions to obtain relative energies. This way, ACE enables the ranking and grouping of large subsets of materials. Within any of these classes, the number of possible materials increases combinatorially with the number of building blocks. For real-world applications, defects often need to be included in the quantum chemistry model, which further increases the chemical space under consideration. For particularly rigid frameworks and extended periodic materials, only a local and low-dimensional geometric response to a change of composition is typically observed. As such, ACE amount to a complementary means to rigorously sample compositional ensembles within a given framework class. As shown below, ACE become an enabling tool for the systematic identification of structure property relations within given compound classes that further the understanding of the respective impact of the compositional degrees of freedom in materials design. Here, we exemplify this by identifying design rules from analysis of over 400 million alchemical estimates of BN-doped derivatives in the picene framework.
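As a schematic illustration of the ACE idea described above (a minimal sketch under the stated assumptions, not the authors' code), the relative total energies of a pair of alchemical enantiomers can be estimated by treating their electronic energies as degenerate and comparing only the nuclear-nuclear repulsion evaluated on the shared, fixed skeleton:

```python
import numpy as np

def nuclear_repulsion(charges, coords):
    """Nuclear-nuclear repulsion energy (atomic units) for fixed positions.

    charges : array of nuclear charges Z_i
    coords  : (N, 3) array of Cartesian positions in Bohr
    """
    charges = np.asarray(charges, dtype=float)
    coords = np.asarray(coords, dtype=float)
    e_nn = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            r_ij = np.linalg.norm(coords[i] - coords[j])
            e_nn += charges[i] * charges[j] / r_ij
    return e_nn

def ace_energy_difference(charges_a, charges_b, coords):
    """Alchemical-chirality-based estimate (ACE) of the total-energy difference
    between two alchemical enantiomers sharing the same fixed skeleton: their
    electronic energies are taken as degenerate (up to 3rd order), so the
    estimate reduces to the difference in nuclear repulsion."""
    return nuclear_repulsion(charges_a, coords) - nuclear_repulsion(charges_b, coords)

# Toy example: a B/N swap on two sites of an otherwise carbon framework,
# reflected about the all-carbon reference (hypothetical coordinates in Bohr).
coords = np.array([[0.0, 0.0, 0.0],
                   [2.7, 0.0, 0.0],
                   [5.4, 0.0, 0.0],
                   [6.8, 2.3, 0.0]])
enantiomer_1 = [5, 7, 6, 6]   # B, N, C, C
enantiomer_2 = [7, 5, 6, 6]   # N, B, C, C  (nuclear charge changes inverted)
print(ace_energy_difference(enantiomer_1, enantiomer_2, coords))
```

The sign of the difference follows the rule stated above: placing the higher nuclear charge on the more crowded site raises the estimated total energy relative to its mirror image.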
This research project was born from the need to apply a management tool as robust as Enterprise Architecture to the development of a proposed methodology for the implementation of enterprise resource planning systems, better known as ERP, from the vendor Infor, version Ln. It appeared with the aim of highlighting the importance of involving the four domains of Enterprise Architecture (Business, Information, Application and Technology), which are present in any organization, in an ERP implementation project, because it often happens in such projects that the technology department is understood to be the only party responsible and involved, and this is a serious mistake that can even lead to the failure of the implementation. The consulting company that is the object of this descriptive study has 21 years of experience on the market implementing ERP systems. All this experience has helped it to elaborate an implementation methodology based on worldwide best practices delivered by institutions such as the PMI and Microsoft; nevertheless, it is the opinion of its own collaborators and leaders that there are several opportunities for improvement in the aforementioned methodology. Enterprise Architecture, with the Architecture Development Method (ADM) proposed by TOGAF (The Open Group Architecture Framework), can contribute significantly to the implementation of an ERP system, and the way it does so is what is shown in this work.
*Kernelized bandits* , also named *Bayesian optimization* (BO) , has been an immensely popular method for various applications involving the optimization of complicated black-box reward functions. For example, BO has been extensively used to optimize the hyperparameters of machine learning (ML) models , the parameters of computationally expensive simulators , etc. In every iteration $t=1,\ldots,T$, BO chooses an arm/input $x_t$ and then queries the reward function $f$ for a noisy observation $y_t=f(x_t)+\zeta_t$ where $\zeta_t$ is a sub-Gaussian noise. In addition to its impressive practical performance, BO is also equipped with solid theoretical guarantees. A number of commonly adopted BO algorithms have been theoretically shown to enjoy sub-linear upper bounds on their *cumulative regret* , which ensures that they are guaranteed to be able to find the global optimum of the reward function as $T$ (i.e., the total number of function queries) increases. On the other hand, a lower bound of $\Omega(\sqrt{T})$ (ignoring additional log factors) on the cumulative regret of BO has been shown , which represents the fundamental limit of any BO algorithm (in the classical setting). In other words, no classical BO algorithm can achieve a cumulative regret smaller than $\Omega(\sqrt{T})$. This naturally begs the question: *can we leverage more advanced technology to go beyond the classical setting and hence break this fundamental limit of $\Omega(\sqrt{T})$?* In this work, we give an affirmative answer by showing that this can be achieved with the aid of *quantum computing* . Quantum bandits, which incorporates quantum computing into bandit algorithms, has been studied by a number of recent works . Notably, the recent work of has introduced quantum variants of classical algorithms for multi-armed bandits (MAB) and stochastic linear bandits (SLB). In the setting of quantum bandits adopted by , every query to the reward function $f$ at the arm $x_t$ (in the classical setting) is replaced by a chance to access a *quantum oracle*, which encodes the reward distribution for the arm $x_t$. For every selected arm $x_t$, has proposed to adopt the *quantum Monte Carlo* (QMC) algorithm as a subroutine to obtain an accurate estimation of $f(x_t)$ in an efficient way, i.e., using a small number of queries to the quantum oracle (Lemma ). As a result, has shown that the regrets of the quantum algorithms for both MAB and SLB are significantly improved compared with their classical counterparts and are smaller than their classical lower bounds. However, both MAB and SLB fall short when optimizing complex real-world functions (e.g., optimizing the hyperparameters of ML models). This is because MAB is unable to exploit the correlations among the arms to model the reward function, and the assumption of a linear reward function adopted by SLB is usually too restrictive in practice. Therefore, designing a quantum bandit algorithm capable of optimizing sophisticated non-linear reward functions, which is a critical step towards practically useful quantum bandit algorithms, is still an open problem. In this work, we resolve this open problem by proposing the first algorithm for quantum BO. Similar to , in our quantum BO problem setting, queries to the reward function in the classical setting are replaced by access to a quantum oracle encoding the reward distribution. 
As discussed in , a quantum oracle is available when the learning environment is implemented by a quantum algorithm, which makes the setting of our quantum BO fairly general. For example, our quantum BO algorithm may be used to optimize the hyperparameters of quantum ML algorithms , such as quantum support vector machines (SVMs) , quantum neural networks (NNs) , among others. It may also be used to optimize the parameters of simulators implemented on a quantum computer, or for applications involving quantum systems where the data produced is inherently quantum. Moreover, our quantum BO algorithm could also be applied to classical data and algorithms, because as discussed in , any classical computer program can be converted to a quantum circuit, allowing it to serve as a quantum oracle in our quantum BO algorithm. In this work, we introduce the first quantum BO algorithm: *quantum-Gaussian process-upper confidence bound* (Q-GP-UCB). For every selected arm $x_s$, our Q-GP-UCB algorithm (Algo. ) adopts the QMC subroutine (Lemma ) to efficiently estimate the corresponding reward function value $f(x_s)$ to satisfy a target estimation error $\epsilon_s$. Every arm $x_s$ is selected based on our *weighted* GP posterior distribution, in which every previously selected arm $x_s$ is given a weight of $1/\epsilon_s^2$ which is inversely related to its estimation error. The theoretical analysis of our Q-GP-UCB is faced with non-trivial technical challenges, and our contributions can be summarized as follows:
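As a rough sketch of the kind of weighted GP posterior described above (an illustrative reconstruction under simple assumptions, not the authors' exact algorithm; here the weight $1/\epsilon_s^2$ enters as a per-observation noise variance proportional to $\epsilon_s^2$, and `lam` is an assumed scaling constant):

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    """Squared-exponential kernel between the row vectors of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def weighted_gp_posterior(X, y, eps, x_query, lengthscale=1.0, lam=1.0):
    """Posterior mean/variance where observation s carries weight 1/eps_s^2,
    i.e. its effective noise variance is lam * eps_s^2 (illustrative choice)."""
    K = rbf_kernel(X, X, lengthscale) + lam * np.diag(eps ** 2)
    k_star = rbf_kernel(x_query, X, lengthscale)
    K_inv = np.linalg.inv(K)
    mean = k_star @ K_inv @ y
    var = rbf_kernel(x_query, x_query, lengthscale).diagonal() \
          - np.einsum('ij,jk,ik->i', k_star, K_inv, k_star)
    return mean, var

def ucb_acquisition(mean, var, beta=2.0):
    """Upper confidence bound used to pick the next arm."""
    return mean + beta * np.sqrt(np.maximum(var, 0.0))

# Tiny illustration: three past arms whose QMC estimates had different target errors.
X = np.array([[0.1], [0.5], [0.9]])
y = np.array([0.2, 0.8, 0.3])
eps = np.array([0.05, 0.01, 0.1])          # tighter error => larger weight
grid = np.linspace(0.0, 1.0, 101)[:, None]
mu, sigma2 = weighted_gp_posterior(X, y, eps, grid)
next_arm = grid[np.argmax(ucb_acquisition(mu, sigma2))]
print(float(next_arm))
```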
The theory of graph minors, developed over the span of over 20 years by Robertson and Seymour, had a tremendous impact on the area of graph algorithms. Arguably, one of the cornerstone contributions is the notion of *treewidth* and the deep understanding of obstacles to small treewidth, primarily in the form of the *excluded grid theorem*. Very tight relations of treewidth and the size of the largest grid as a minor in sparse graph classes, such as planar graphs or graphs excluding a fixed graph as a minor, led to the rich and fruitful theory of bidimensionality. In general graphs, fine understanding of the existence of well-behaved highly connected structures (not necessarily grids) in graphs of high treewidth has been crucial to the development of efficient approximation algorithms for the <span class="smallcaps">Disjoint Paths</span> problem. In undirected graphs, one of the first theorems that gave some well-behaved structure in a graph that is in some sense highly connected is the famous Erdős-Pósa theorem linking the feedback vertex set number of a graph (the minimum number of vertices one needs to delete to obtain an acyclic graph) and the cycle packing number (the maximum possible size of a family of vertex-disjoint cycles in a graph). The Erdős-Pósa theorem states that a graph that does not contain a family of $k$ vertex-disjoint cycles has feedback vertex set number bounded by ${\mathcal{O}}(k \log k)$. A similar statement for directed graphs, asserting that a directed graph without a family of $k$ vertex-disjoint cycles has feedback vertex set number at most $f(k)$, was long known as Younger's conjecture until it was finally proven by Reed, Robertson, Seymour, and Thomas in 1996. However, the function $f$ obtained in  is not elementary; in particular, the proof relies on the Ramsey theorem for $\Theta(k)$-regular hypergraphs. This is in contrast with the (tight) $\Theta(k \log k)$ bound in undirected graphs. Our main result is that if one compares the feedback vertex set number of a directed graph to the *quarter-integral* and *half-integral* cycle packing number (i.e., the maximum size of a family of cycles in $G$ such that every vertex lies on at most four resp. two cycles), one obtains a polynomial bound. **Theorem 1**. * Let $G$ be a directed graph that does not contain a family of $k$ cycles such that every vertex in $G$ is contained in at most $p$ cycles.* We remark that if one relaxes the condition even further to a *fractional cycle packing*, [^7] Seymour proved that a directed graph without a fractional cycle packing of size at least $k$ admits a feedback vertex set of size ${\mathcal{O}}(k \log k \log \log k)$. *Directed treewidth* is a directed analog of the successful notion of treewidth, introduced in . An analog of the excluded grid theorem for directed graphs has been conjectured by Johnson, Robertson, Seymour, and Thomas in 2001 and finally proven by Kawarabayashi and Kreutzer in 2015. As in the case of the directed Erdős-Pósa property, the relation between the directed treewidth of a graph and a largest directed grid as a minor in  is not elementary. For a directed graph $G$, let $\mathop{\mathrm{fvs}}(G)$, $\mathop{\mathrm{dtw}}(G)$, and $\mathop{\mathrm{cp}}(G)$ denote the feedback vertex set number, directed treewidth, and the cycle packing number of $G$, respectively. The following lemma is a restatement of the result of Amiri, Kawarabayashi, Kreutzer, and Wollan : **Lemma 2** ().
* Let $G$ be a directed graph with $\mathop{\mathrm{dtw}}(G) \leq w$. For each strongly connected directed graph $H$, the graph $G$ has either $k$ disjoint copies of $H$ as a topological minor, or contains a set $T$ of at most $k \cdot (w+1)$ vertices such that $H$ is not a topological minor of $G-T$.* Note that the authors of  prove Lemma  for both topological and butterfly minors, but the previous restatement is sufficient for our purposes. By taking $H$ as the directed 2-cycle it is easy to derive the following bound: **Lemma 3**. * For a directed graph $G$ it holds that $$\mathop{\mathrm{fvs}}(G)\le(\mathop{\mathrm{dtw}}(G)+1)(\mathop{\mathrm{cp}}(G)+1).$$* In the light of Lemma  and since a directed grid minor of size $k$ contains $k$ vertex-disjoint cycles, the directed grid theorem of Kawarabayashi and Kreutzer  is a generalization of the directed Erdős-Pósa property due to Reed, Robertson, Seymour, and Thomas . Theorem  is a direct corollary of Lemma  and the following statement that we prove. **Theorem 4**. * Let $G$ be a directed graph that does not contain a family of $k$ cycles such that every vertex in $G$ is contained in at most $p$ cycles.* Furthermore, if one asks not for a cycle packing, but a packing of subgraphs of large directed treewidth, we prove the following packing result. **Theorem 5**. * There exists an absolute constant $c$ with the following property. For every pair of positive integers $a$ and $b$, and every directed graph $G$ of directed treewidth at least $c\cdot a^6 \cdot b^8 \cdot \log^2(ab)$, there are directed graphs $G_1,G_2,\ldots,G_a$ with the following properties:* Note that by setting $b=2$ in Theorem , one obtains the case $p=4$ of Theorem  with a slightly weaker bound of ${\mathcal{O}}(k^6 \log^2 k)$ and, consequently, case $p=4$ of Theorem  with a weaker bound of ${\mathcal{O}}(k^7 \log^2 k)$. Theorem  should be compared to its undirected analog of Chekuri and Chuzhoy  that asserts that in an undirected graph $G$ of treewidth at least $c \min (ab^2, a^3b)$ one can find $a$ vertex-disjoint subgraphs of treewidth at least $b$. While we still obtain a polynomial bound, we can only prove the existence of a quarter-integral (as opposed to integral, i.e., vertex-disjoint) packing of subgraphs of high directed treewidth. In the <span class="smallcaps">Disjoint Paths</span> problem, given a graph $G$ and a set of terminal pairs $(s_i,t_i)_{i=1}^k$, we ask to find an as large as possible collection of vertex-disjoint paths such that every path in the collection connects some $s_i$ with $t_i$. Let $\mathrm{OPT}$ be the number of paths in the optimum solution; we say that a family $\mathcal{P}$ is a *congestion-$c$ polylogarithmic approximation* if every path in $\mathcal{P}$ connects a distinct pair $(s_i,t_i)$, each vertex of $V(G)$ is contained in at most $c$ paths of $\mathcal{P}$, and $|\mathcal{P}| \geq \mathrm{OPT} / \mathrm{polylog}(\mathrm{OPT})$. The successful line of research of approximation algorithms for the <span class="smallcaps">Disjoint Paths</span> problem in undirected graphs leading in particular to a congestion-2 polylogarithmic approximation algorithm of Chuzhoy and Li  for the edge-disjoint version, would not be possible without a fine understanding of well-behaved well-connected structures in a graph of high treewidth. 
Of central importance to such *routing* algorithms is the notion of a *crossbar*: a crossbar of order $k$ and congestion $c$ is a subgraph $C$ of $G$ with an *interface* $I \subseteq V(C)$ of size $k$ such that for every matching $M$ on $I$, one can connect the endpoints of the matching edges with paths in $C$ such that every vertex is in at most $c$ paths. Most of the known approximation algorithms for <span class="smallcaps">Disjoint Paths</span> find a crossbar $(C,I)$ with a large set of disjoint paths between $I$ and the set of terminals $s_i$ and $t_i$. While one usually does not control how the paths connect the terminals $s_i$ and $t_i$ to interface vertices of $I$, the ability of the crossbar to connect *any* given matching on the interface leads to a solution. To obtain a polylogarithmic approximation algorithm, one needs the order of the crossbar to be comparable to the number of terminal pairs, which — by well-known tools such as *well-linked decompositions*  — is of the order of the treewidth of the graph. At the same time, we usually allow constant congestion (every vertex can appear in a constant number of paths of the solution, instead of just one). Thus, the milestone graph-theoretic result used in approximation algorithms for <span class="smallcaps">Disjoint Paths</span> is the existence of a congestion-2 crossbar of order $k$ in a graph of treewidth $\Omega(k \mathrm{polylog}(k))$. While the existence of similar results for the general <span class="smallcaps">Disjoint Paths</span> problem in directed graphs is implausible , Chekuri, and Ene proposed to study the case of *symmetric demands* where one asks for a path from $s_i$ to $t_i$ and a path from $t_i$ to $s_i$ for a terminal pair $(s_i,t_i)$. First, they provided an analog of the well-linked decomposition for this case , and then with Pilipczuk  showed the existence of an analog of a crossbar and a resulting approximation algorithm for <span class="smallcaps">Disjoint Paths</span> with symmetric demands in planar directed graphs. Later, this result has been lifted to arbitrary proper minor-closed graph classes . However, the general case remains widely open. As discussed above, for applications in approximation algorithms for <span class="smallcaps">Disjoint Paths</span>, it is absolutely essential to squeeze as much as possible from the bound linking directed treewidth of a graph with the order of the crossbar, while the final congestion is of secondary importance (but we would like it to be a small constant). We think of Theorem  as a step in this direction: sacrificing integral packings for quarter-integral ones, we obtain much stronger bounds than the non-elementary bounds of . Furthermore, such a step seems necessary, as it is hard to imagine a crossbar of order $k$ that would not contain a constant-congestion (i.e., every vertex might be used in a constant number of cycles) packing of $\Omega(k)$ directed cycles. On the technical side, the proof of Theorem  borrows a number of technical tools from the recent work of Hatzel, Kawarabayashi, and Kreutzer that proved polynomial bounds for the directed grid minor theorem in planar graphs . We follow their general approach to obtain a directed treewidth sparsifier  and modify it in a number of places for our goal. The main novelty comes in different handling of the case when two linkages intersect a lot. Here we introduce a new partitioning tool (see Section ) which we use in the crucial moment where we separate subgraphs $G_i$ from each other.
The purpose of this article is to simplify the way of learning and applying the Denavit-Hartenberg (D-H) steps for calculating the direct kinematics of an industrial robot. The application described has the advantage of significantly simplifying the mathematical calculation process that is performed in order to find the direct kinematics of the robot. When the D-H parameters (θ, d, a and α) are entered into the program, it has the capacity to show the intermediate results leading to the T matrix (homogeneous transformation matrix). The management and use of the application is intuitive for people who have some knowledge of engineering.
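For reference, the homogeneous transformation built from one row of D-H parameters is standard; a minimal sketch (not the described application's actual code, with a hypothetical 2-joint arm as input) that chains the per-joint matrices into the T matrix could look like this:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transformation for one joint:
    Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha). Angles in radians."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Multiply the per-joint matrices to obtain the base-to-end-effector T matrix."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Hypothetical 2-joint planar arm (link lengths 0.5 m and 0.3 m).
rows = [(np.deg2rad(30), 0.0, 0.5, 0.0),
        (np.deg2rad(45), 0.0, 0.3, 0.0)]
T = forward_kinematics(rows)
print(np.round(T, 3))   # last column holds the end-effector position
```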
A new computational approach for electrical analysis, especially designed for application in biological tissues, is presented. It is based on modelling the electrical properties of the medium by means of lumped circuit elements, such as capacitances, conductances and current sources. The cell-scale model is suitable for modelling the local anisotropy around cell membranes. It makes it possible to obtain the electric potential, ionic concentrations and current densities around cells in time steps in an iterative process. The tissue-scale model utilises volume-averaged values of conductivity and permittivity and suitably models the dispersive characteristic of biological tissues. It makes it possible to obtain potential and current distributions in large volumes of tissue in the time or frequency domain. An example of the analysis of skeletal muscle is presented to demonstrate the features of the method.
Forests play a pivotal role in maintaining the ecological balance. It is necessary to detect changes in forest cover, as forests have a significant role in promoting the carbon cycle. The remote sensing domain has shown promising potential for monitoring forest degradation. However, the problems arising due to missing satellite images in the temporal domain and due to artefacts such as clouds need to be addressed. To detect changes in the forest area, an index for mapping forest cover known as the Normalized Difference Fraction Index (NDFI) has been used. NDFI is calculated for three satellite images (Landsat 7, Landsat 8, and Sentinel-2) and for the fusion of all these satellite images. Following this, the missing image is predicted by applying regression methods, and the best regression method is identified. For the change detection problem, optimal values for the Convolutional Neural Network (CNN) parameters are obtained using a Genetic Algorithm (GA), as sketched below. Later, various filters are applied to the optimal CNN and the best filter is identified.
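The following is a generic illustration of GA-based hyperparameter search of the kind mentioned above; the search space and the fitness function are placeholders (in practice the fitness would be the validation accuracy of the CNN trained for change detection), not the parameters used in this work.

```python
import random

# Hypothetical search space for the CNN parameters tuned by the GA.
SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "filters":       [16, 32, 64, 128],
    "kernel_size":   [3, 5, 7],
    "dropout":       [0.0, 0.25, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(params):
    """Placeholder: in practice, train the CNN on the NDFI-based change-detection
    data with these parameters and return its validation accuracy."""
    return -abs(params["learning_rate"] - 1e-3) + params["filters"] / 256.0

def crossover(p1, p2):
    return {k: random.choice([p1[k], p2[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def genetic_search(pop_size=12, generations=10, elite=4):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]                      # selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]     # recombination + mutation
        population = parents + children
    return max(population, key=fitness)

print(genetic_search())
```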
Multi-agent reinforcement learning (MARL) is a useful paradigm for determining optimal policies in sequential decision making tasks involving a group of agents. MARL has been applied successfully in several contexts, including sensor networks , team robotics , and video games . MARL owes this success in part to recent developments in better function approximators such as deep neural networks . Many works on MARL focus on the case where agents can directly observe the global state of the environment. However, in many scenarios, agents can only receive partial information about the state. The decentralized partially observable Markov decision process (Dec-POMDP) framework is applicable to these types of situations. However, a large body of MARL work assumes that Dec-POMDPs observe data that are deterministic and known functions of the underlying state, which is not the case in general. Consider, for example, robots that receive noisy observations from their sensors. The underlying observation model is stochastic in this case. Under stochastic observation models, one common strategy is to keep track of the posterior distribution (belief) over the set of states, which is known to be a sufficient statistic of the history of the system . For single agents, this posterior distribution can be obtained at each iteration with the optimal Bayesian filtering recursion . Unfortunately, for multi-agent systems, forming this global posterior belief requires aggregation of all data from across all agents in general. The agents can form it in a distributed manner only when they have access to the private information from other agents in the network. And even when agents have access to this level of global knowledge, the computational complexity of forming the global posterior distribution is known to be NP-hard in addition to its large memory requirements. Moreover, obtaining beliefs necessitates significant knowledge about the underlying model of the environment, which is generally not available in practice. Therefore, instead of forming beliefs, most MARL algorithms resort to a model-free and end-to-end approach where agents try to simultaneously learn a policy and an embedding of the history that can replace the beliefs (e.g., recurrent neural networks (RNNs)). Nevertheless, recent empirical works suggest that this model-free approach can be sub-optimal when the underlying signals of the environment are too weak to train a model such as RNN . Moreover, RNNs (or alternative machine learning models) are usually treated as black boxes. In other words, these algorithms lack model interpretability, which is critical for trustworthy systems (see ). Furthermore, even though end-to-end approaches have shown remarkable performance empirically, they are still based on heuristics and lack theoretical guarantees on their performance. Compared to modular approaches, they are inefficient in terms of adaptability and generalization to similar tasks. As an alternative, there is a recent interest towards improving belief-based MARL approaches . These works have focused on emulating conventional beliefs with generative models, or with models learned from action/observation trajectories (in a supervised fashion). In this paper, we also examine belief-based strategies for MARL. In particular, we are interested in the multi-agent policy evaluation problem. Our work complements in the sense that we assume that agents are already capable of forming *local* beliefs with sufficient model knowledge or with generative models. 
Our focus is on the challenge of approximating the *global* Bayesian posterior in a *distributed* manner. **Contributions**. - We consider a Dec-POMDP framework in which agents only know their local observations, actions, and rewards but they are allowed to communicate with their immediate neighbors over a graph. In the proposed strategy (Algorithm ), agents exchange both their belief and value function estimates. - We show in Theorem  that by exchanging beliefs, agents keep a bounded disagreement with the global posterior distribution, which requires fusing all observations and actions. Also, exchanging value function parameters enables agents to cluster around the network centroid for sufficiently small learning rates (Theorem ). Furthermore, we prove that the network centroid attains a bounded difference with a strategy that requires centralized training (Theorem ). - By means of simulations, we confirm that agents attain a small mean-square distance from the network centroid. Moreover, the squared Bellman error (SBE) averaged over the network is shown to be comparable to the SBE of the centralized strategy. **Paper Organization**. In Sec. , we present additional related work. In Sec. , for ease of exposition and introducing notation, we describe the problem in single-agent setting. In Sec. , we propose algorithms for multi-agent policy evaluation. Sec.  includes the theoretical results, and Sec.  includes numerical simulations.
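For concreteness (a standard single-agent formulation, not necessarily this paper's exact notation), the optimal Bayesian filtering recursion referred to above updates the belief $b_t$ over states $s \in \mathcal{S}$ after taking action $a_{t-1}$ and observing $o_t$ as
$$
b_t(s) \;\propto\; p(o_t \mid s) \sum_{s' \in \mathcal{S}} p(s \mid s', a_{t-1})\, b_{t-1}(s') .
$$
Forming the corresponding *global* posterior in a Dec-POMDP would require conditioning on the joint observations and actions of all agents, which is exactly the information that cannot be centralized and that the agents instead approximate by exchanging local beliefs with their neighbors.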
Studies on the narrowband characterization of the on-body propagation channel show variations of up to 40 dB in link loss for different antenna positions and body postures. Channels on the body trunk show a small variance of the path-gain probability density function, whilst those involving the arms or legs show a much greater variance. For most cases the monopole antenna located with the ground plane parallel to the body surface gives the least loss. For ultra-wideband channels, the path delay is highest for non-line-of-sight links around the body, and delay spreads are generally less than 10 ns. Link modeling using a free space assumption gives results within 10 dB, but more accurate modeling requires full body modeling, and an example is given using a locally distorted non-orthogonal FD-TD method.
Cloud database environments are extremely attractive for the deployment of applications of massive extent because of their exceedingly adaptable and accessible infrastructure.
In many applications, there is a delay constraint on the data to be transmitted via a wireless link. These applications range from the most basic voice communication to the more demanding multimedia streaming. However, due to its broadcast nature, the wireless channel is vulnerable to eavesdropping and other security threats. Therefore, it is of critical importance to find techniques to combat these security attacks while satisfying the delay limitation imposed by the Quality of Service (QoS) constraints. This motivates our analysis of the fundamental information theoretic limits of secure transmission over fading channels subject to strict deadlines. Recent works on information theoretic security have been largely motivated by Wyner's wire-tap channel model. In his seminal work, Wyner proved the achievability of non-zero secrecy capacity, assuming that the wiretapper channel is a degraded version of the main one, by exploiting the noise to create an advantage for the legitimate receiver. The effect of fading on the secrecy capacity was further studied in  in the ergodic setting. The main insight offered by this work is that one can *opportunistically* exploit the fading to achieve a non-zero secrecy capacity even if the eavesdropper channel is better than the legitimate receiver channel, on the average. Delay limited transmission over fading channels has been well studied in different network settings and using various traffic models. For example, in , the delay limited capacity notion was introduced and the optimal power control policies were characterized in several interesting scenarios. In , the strict delay limitation of  was relaxed by allowing for buffering the packets at the transmitter. In this setup, the asymptotic behavior of the power-delay trade-off curve was characterized, yielding valuable insights on the structure of the optimal resource allocation strategies. More recently, the scheduling problem of data transmission over a finite delay horizon assuming perfect CSI was considered in . Our work can be viewed as a generalization of  where a secrecy constraint is imposed on the problem. The extension to the bursty traffic scenario is currently under investigation. The delay limited transmission of secure data over fading channels was considered previously in . In this work, the authors attempted to send the secure information using binning techniques inspired by the wiretap channel results. The drawback of this approach is that it fails to secure the information in the particular instants where the eavesdropper channel gain is larger than that of the main channel, resulting in the so-called **secrecy outage** phenomenon (as defined in ). Unfortunately, in the delay limited setting, the secrecy outage cannot be made to vanish by increasing the block length, leading to the conclusion that the delay limited rate achieved by this approach is equal to zero for most channel distributions of interest. This obstacle is overcome by our two-stage approach. Here, the delay sensitive data of the current block is secured via Vernam's one time pad approach, which was proved to achieve perfect secrecy by Shannon, where the legitimate nodes agree on the private key during the previous blocks. Since the key packets are **not delay sensitive**, the two nodes can share the key by distributing its bits over many fading realizations to capitalize on the ergodic behavior of the channel.
Through the appropriate rate allocation, the key bits can be **superimposed** on the delay sensitive data packets so that they can be used for securing future packets. This is referred to as the key renewal process in the sequel. This process requires an *initialization phase* to share the key needed for securing the first data packets. However, the loss in secrecy entailed by the initialization overhead vanishes in the asymptotic limit of a large number of data packets. Our analytical results establish the asymptotic optimality, at high SNR, of this novel approach in the scenario where both the main and eavesdropper channel gains are known *a-priori* at the transmitter (for a wide class of channel distributions). When only the main channel CSI is available, this approach is shown to achieve **a non-zero constant** secrecy rate for a wide class of *quasi-static channels* (i.e., the class of invertible channels). The rest of the paper is organized as follows. Section II details the system model and the notation used throughout the rest of the paper. In Section III, our main results for both the full and main CSI cases are obtained. Finally, Section IV concludes the paper.
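To make the key-renewal idea concrete, the per-block secrecy mechanism itself is just Vernam's one-time pad; a minimal sketch (illustrative only, ignoring the rate-allocation and key-distribution aspects analyzed in the paper) is:

```python
import os

def one_time_pad(data: bytes, key: bytes) -> bytes:
    """Vernam one-time pad: XOR the data with a key of the same length.
    Shannon's perfect secrecy requires the key to be uniformly random, as long
    as the data, and never reused, which is why fresh key bits must keep being
    shared over later fading blocks."""
    assert len(key) == len(data), "key must match the data length"
    return bytes(d ^ k for d, k in zip(data, key))

# Delay-sensitive packet of the current block, secured with key bits agreed
# upon (superimposed on earlier packets) during previous blocks.
packet = b"delay-sensitive payload"
key = os.urandom(len(packet))        # stands in for the previously shared key
ciphertext = one_time_pad(packet, key)
assert one_time_pad(ciphertext, key) == packet   # legitimate receiver decrypts
```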
A general billing method for NGN is analyzed and presented on the basis of comprehensively considering business type (data, voice, video, etc.), call type (local, district, long-distance, etc.), billing mode (based on time period or traffic), QoS (Quality of Service) level, date and time. The method uses the above-mentioned factors as variables, with a group of formulas, to solve the unified billing problem for NGN multi-business effectively. Keywords: Billing, Telecom, NGN, PSTN, OSS
Let $G=(V,E)$ be a simple graph on the vertex set $V(G)=\{v_1,v_2,\ldots,v_n\}$ and edge set $E$. The *adjacency matrix* of $G$ is an $n$ by $n$ matrix $A(G)$ whose $(i,j)$-th entry is $1$ if vertices $v_i$ and $v_j$ are adjacent and $0$ otherwise. The *spectrum* of $G$ is the multi-set of eigenvalues of $A(G)$. Two graphs $G$ and $G'$ are called *cospectral* if they share the same spectrum. We say $G$ is *determined by its spectrum* (*DS* for short) if it has no non-isomorphic cospectral mate. The problem of constructing cospectral graphs has been investigated by several authors; for a survey of results in this area we refer the reader to . In the authors have used the concept of $m$-cospectrality to construct new cospectral graphs. Haemers et al. in have considered the Godsil-McKay switching method to construct non-isomorphic cospectral graphs; see the paper for more details. In this article we use the concept of lifts of graphs to construct new non-isomorphic cospectral graphs from given small cospectral pairs of graphs.
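As a small illustration of the basic notions above (a generic numerical check, not the lift construction of this article), cospectrality of two graphs can be verified by comparing their sorted adjacency spectra:

```python
import numpy as np

def spectrum(adj):
    """Sorted eigenvalues of a symmetric adjacency matrix."""
    return np.sort(np.linalg.eigvalsh(np.asarray(adj, dtype=float)))

def cospectral(adj_a, adj_b, tol=1e-9):
    """True if the two graphs share the same adjacency spectrum."""
    sa, sb = spectrum(adj_a), spectrum(adj_b)
    return sa.shape == sb.shape and np.allclose(sa, sb, atol=tol)

# Classical example: C4 plus an isolated vertex and the star K_{1,4} are
# non-isomorphic but cospectral, both with spectrum {-2, 0, 0, 0, 2}.
c4_plus_k1 = [[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 0, 0, 0, 0]]
star_k14   = [[0, 1, 1, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0]]
print(cospectral(c4_plus_k1, star_k14))   # True
```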
Due to the essential role it plays in quantum communication and quantum information processing, quantum entanglement has been the subject of many studies in recent years. The study of quantum entanglement from various viewpoints has been a very active area and has led to many impressive results. As a key resource, quantum entanglement has been used in many quantum communication protocols such as superdense coding, quantum teleportation, quantum cryptography, and remote-state preparation, and in quantum computational tasks such as the one-way quantum computer. As one of the fundamental differences between quantum and classical correlations, an essential property of entanglement is that a quantum system entangled with one of the other subsystems limits its entanglement with the remaining ones. Monogamy relations govern the distribution of entanglement in multipartite quantum systems. Moreover, the monogamy property has emerged as an ingredient in the security analysis of quantum key distribution. The monogamy inequalities are important relations satisfied by multipartite quantum entanglement. In the classical scenario, the fact that two systems share some correlations does not prevent them from being correlated with a third party. Nevertheless, two maximally entangled quantum systems cannot be entangled with a third one. Generally, the more entanglement there is between two systems, the less entanglement there is with the remaining systems. For a tripartite system $A$, $B$ and $C$, the usual monogamy of an entanglement measure $\mathcal{E}$ implies that the entanglement between $A$ and $BC$ satisfies $\mathcal{E}_{A|BC}\geq \mathcal{E}_{AB} +\mathcal{E}_{AC}$. However, such monogamy relations are not always satisfied by all entanglement measures for all quantum states. In fact, it has been shown that the squared concurrence $C^2$ and the squared entanglement of formation $E^2$ satisfy the monogamy relations for multi-qubit states. The monogamy inequality was further generalized to various entanglement measures such as continuous-variable entanglement, squashed entanglement, entanglement negativity, Tsallis-q entanglement, and Renyi entanglement. Monogamy relations characterize the distributions of quantum correlations in multipartite systems and play a crucial role in the security of quantum cryptography. Tighter monogamy relations imply finer characterizations of the quantum correlation distributions, which tightens the security bounds in quantum cryptography. In this paper, we provide a finer characterization of multiqubit entanglement in terms of concurrence. By using the Hamming weight of the binary vectors related to the subsystems, we establish a class of monogamy inequalities for multiqubit entanglement based on the $\alpha$th power of concurrence for $\alpha\geq 2$. For $0\leq \beta\leq2$, we establish a class of polygamy inequalities for multiqubit entanglement in terms of the $\beta$th power of concurrence and the concurrence of assistance. We further show that our class of monogamy and polygamy inequalities holds in a tighter way than those provided before. Then we give monogamy and polygamy inequalities for general quantum correlations, which can be applied to SCREN, entanglement of formation and Tsallis-$q$ entanglement, and give rise to tighter inequalities than the existing ones for some classes of quantum states, or to monogamy relations with fewer constraints on the quantum states.
Moreover, we take SCREN as an example to show the advantage of the general monogamy and polygamy inequalities. We also show that our monogamy inequalities remain valid for the counterexamples to the tangle-based monogamy inequality, where at least one local dimension is larger than two.
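For reference, the baseline multiqubit relations that such classes of inequalities refine can be written, for a state of qubit $A$ and qubits $B_1,\ldots,B_{N-1}$, as
$$
C^{\alpha}_{A|B_1\cdots B_{N-1}} \;\geq\; \sum_{i=1}^{N-1} C^{\alpha}_{AB_i} \quad (\alpha\geq 2),
\qquad
C^{\beta}_{A|B_1\cdots B_{N-1}} \;\leq\; \sum_{i=1}^{N-1} \bigl(C^{a}_{AB_i}\bigr)^{\beta} \quad (0\leq\beta\leq 2),
$$
where $C$ denotes the concurrence and $C^{a}$ the concurrence of assistance; the Hamming-weight-based inequalities established here tighten these bounds (their exact form is given in the paper and is not reproduced in this sketch).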
What kinds of innovation networks could generate the most efficient innovation? To answer this question, this study establishes a simulation model of the innovation generation process, constructs a set of networks including 112 different representative structures, and compares the efficiency of these networks. Our innovation model is able to distinguish two different kinds of innovation: explorative and exploitative. The efficiency of innovation is measured by both time and cost. The results show that the complete graph is the most efficient structure, in both time and cost, for explorative innovation. In contrast, for exploitative innovation the star is the most efficient in time but very inefficient in cost; the circle is the most efficient in cost, but slightly behind the star in time efficiency. Thus, for organizations pursuing explorative innovation, the complete graph is the optimal choice. For organizations pursuing exploitative innovation, the star is the fastest; however, when cost is also considered, the circle is the better choice. These results derive from three main causal elements: CPP, bottlenecks, and the turbulence of conversations. As a structural feature, CPP represents the number of conversations that can be held at the same time on the network. As CPP increases, the number of conversations in every period increases and knowledge can be distributed faster. Bottlenecks result from the high centrality of some nodes; their existence blocks knowledge flow and diminishes innovation efficiency. These two elements have the same influence on both explorative and exploitative innovation, but the turbulence of conversations acts differently: high turbulence makes explorative innovation much more efficient, but exploitative innovation much less efficient. For explorative innovation, the complete graph satisfies all three conditions. For exploitative innovation, the circle network is the closest. We also consider conditions with restricted CPP: organizations constrained by cost, such as very busy companies whose employees have little time to talk freely. When the cost per period becomes an external constraint, high CPP is not available. Organizations that pursue explorative innovation should increase conversation turbulence and eliminate bottlenecks; thus, the ideal core-periphery graph is the best choice under different CPP. Organizations that pursue exploitative innovation should decrease conversation turbulence and eliminate bottlenecks; therefore, the improved symmetrical-axes star networks are the best choice under different CPP. This study provides a new theoretical model to simulate the innovation process and has practical significance for the design of innovative organizations.
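The conversation-based mechanism described above can be prototyped in a few lines. The toy simulation below is our own illustrative proxy, not the authors' model: nodes hold knowledge pieces, each period a set of pairwise conversations is held (optionally capped by a CPP-like parameter), and we count the periods until some node has assembled every piece.

```python
import itertools
import random
import networkx as nx

def diffusion_time(G, cpp=None, rng=random):
    """Periods until one node has collected every knowledge piece via pairwise conversations."""
    n_pieces = G.number_of_nodes()
    knowledge = {v: {v} for v in G.nodes}               # each node starts with one unique piece
    for period in itertools.count(1):
        edges = list(G.edges)
        rng.shuffle(edges)
        busy, talks = set(), 0
        for u, v in edges:                               # each node joins at most one conversation,
            if u in busy or v in busy:                   # and at most `cpp` conversations per period
                continue
            if cpp is not None and talks >= cpp:
                break
            shared = knowledge[u] | knowledge[v]         # partners pool what they know
            knowledge[u] = knowledge[v] = shared
            busy |= {u, v}
            talks += 1
        if any(len(k) == n_pieces for k in knowledge.values()):
            return period

n = 12
for name, G in [("complete", nx.complete_graph(n)),
                ("star", nx.star_graph(n - 1)),
                ("circle", nx.cycle_graph(n))]:
    print(name, diffusion_time(G))
```

Such a proxy only captures the knowledge-distribution and bottleneck aspects (CPP and node centrality); it does not model the explorative/exploitative distinction or the turbulence of conversations studied in the paper.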
Python is widely used due to its flexibility and the abundance of third-party libraries (e.g., web and machine learning frameworks). However, flexibility brings challenges to code optimization and also makes it error-prone. Variable type inconsistency is a common error in dynamic languages. Due to Python's dynamic nature, the interpreter cannot check type inconsistency the way a static programming language compiler (e.g., Go or Rust) does. Python type checkers  take advantage of annotations to detect type inconsistencies. These tools primarily rely on type annotations manually written by developers, which are expensive to provide. To facilitate user programming and the checking of type errors, variable type inference is a necessary step. Deep learning has been applied to infer types for JavaScript  by leveraging TypeScript  to generate a large corpus with precise annotations. However, there exist few good solutions for Python because of its broad scope of dynamic features and extensive dependencies on third-party libraries, which leaves many open opportunities in this area. The quality of the dataset itself creates a large gap between annotating Python and JavaScript. Type inference tools that apply static analysis or dynamic analysis  do not require labeled annotations for type inference. However, they are imprecise and leave out the abundant natural language semantics in source code. Prior work proposed to use a probabilistic model to infer variable types by leveraging type hints from data flow, attributes, subtypes, and variable names. However, it takes considerable time to analyze the source code and solve the probabilistic constraints. Without a sufficiently large dataset to provide enough signals, the performance of the probabilistic model is also limited. Existing human-labeled type annotations in mypy  and typeshed  only cover a few annotations for function arguments and return types. The dataset contains no variable annotations and is inadequate for inferring Python variable types. TypeWriter  also targets type inference for Python, but it addresses the problem of inferring function arguments and return types. Experiments show that TypeWriter is insufficient for variable type prediction. Function-level annotations (function arguments and return types) are useful as an API contract for IDEs, while variable-level annotations can be used to type-check each variable. In this paper, we present PYInfer, a deep learning based approach to generate type annotations for Python. A high-level overview of PYInfer is depicted in Fig. . Since a human-labeled dataset for variables is not available, we first employ a static analyzer, PySonar2 , to automatically generate initial annotations from top-star Python GitHub projects. We then apply a series of data cleaning techniques to refine the quality of our dataset. We further feed the annotations and contextual information to train a deep neural network, which effectively ranks candidate types with probabilities. We highlight that fusing deep learning with static analysis to infer type annotations is promising. By combining deep learning with static analysis from end to end, our approach is capable of analyzing code semantics with well-developed natural language processing (NLP) techniques. PYInfer's effectiveness benefits from addressing the following challenges: - **Annotation Dataset Collection.** Analyzing contextual source code semantics for variable types demands a large annotated dataset.
However, there exists no well-acknowledged large dataset with annotations. We generate our annotated dataset with enriched data based on inference results from PySonar2 and perform data cleaning to enhance its quality; this is itself a significant contribution, because high-quality labeled data is critical for deep learning. Our dataset is collected from 4,577 popular Python projects on GitHub with 54,928,617 Lines Of Code (LoC) from 320,402 source files. It contains 77,089,946 annotations, which is large enough for most research on Python types. - **User-defined Types.** Due to the flexibility of Python, types can be user-defined and changed during runtime. We frame Python type inference as a classification task to cover user-defined types in our 500 most common types. We investigate the performance on 11 basic types compared with 500 types. Our model achieves 91.187% accuracy on classifying 11 basic types and 81.195% on predicting 500 types. This is a significant advance over past work  on this problem. As a classification task, our model provides confidence levels for each type. It achieves 97.677% precision at a confidence threshold of 0.9. - **Source Code Embeddings.** Source code contains abundant semantic information and type hints in variable names and usages, which is helpful for type inference. Previous work  has applied word embeddings for type inference. However, we show that these embeddings do not work well due to the Out-Of-Vocabulary (OOV) issue caused by the large number of dynamic features and user-defined types in Python. To tackle this problem, we employ the Byte Pair Encoding (BPE)  algorithm. It provides sufficient signals to analyze semantics in variable names and contextual data. Compared with graph-based embeddings in LambdaNet , BPE embeddings are lightweight and can be easily extended to analyze other languages. We demonstrate that BPE is effective in inferring variable types. Our model improves accuracy by 27.1% with BPE embeddings over GloVe embeddings . - **Contextual Code Semantics.** A key insight in our approach is leveraging contextual code semantics for variable type inference. We hypothesize that the context within a certain margin conveys relevant semantic information to characterize the variable. Inspired by interprocedural static analysis , our approach is capable of analyzing the semantics of variables together with the structural syntax and grammar information. The setting of the margin hyperparameter is illustrated in Fig. . For each variable, we collect source code tokens within its contextual scope. We adopt the Gated Recurrent Unit (GRU)  with the attention mechanism  to analyze contextual semantics. Our ablation test on contextual information shows a 41.0% improvement in accuracy. Our evaluation on the human-labeled typeshed  dataset demonstrates the same result. The contextual information provides local semantics for variables, and it is useful for deriving variable annotations. Putting all these contributions together, we develop an end-to-end, highly effective and efficient framework to infer variable types for Python statically. Our dataset is large enough for most research on Python types, which itself is a novel contribution. We achieve an accuracy of 91.187% on 11 basic types, and 81.195% on the 500 most common types. PYInfer demonstrates superiority in both coverage and time efficiency on large projects compared to existing work .
Instead of assigning weights to multiple factors for probabilistic inference , PYInfer achieves 5.2X more coverage and is 187.4X faster. Our model annotates a variable in an average of 1 millisecond. Compared with PySonar2, our tool takes a similar amount of time for analysis but generates 5X more annotations. A motivating example comparing PYInfer and PySonar2 is provided in Section . Although trained on the annotations generated by PySonar2, PYInfer can handle sophisticated cases using contextual code semantics, as shown in Fig. . PYInfer can also be extended to perform function argument inference. It shows superiority over TypeWriter in inferring variable types and function arguments. We have released our PYInfer model, source code, and dataset to facilitate further research[^1]. Existing type checkers can benefit from the type annotations generated by PYInfer to detect type inconsistencies. We provide a workflow for integrating PYInfer with pyre to detect variable inconsistencies in Python repositories. As an end-to-end static type annotator, PYInfer provides annotations with probabilities in 1 ms per variable. It provides type annotations for programmers seamlessly while they are programming. Our framework can also be used to infer argument types. Since our approach relies on high-level semantics rather than graph structures, it can easily be extended to annotating variables and detecting semantic errors in other dynamically typed languages.
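To make the model architecture concrete, here is a minimal sketch of a GRU-with-attention classifier over BPE token ids of the kind described above. It is an illustrative reconstruction, not PYInfer's released implementation; the vocabulary size, embedding and hidden dimensions, bidirectionality, and context length are hypothetical choices of ours.

```python
import torch
import torch.nn as nn

class TypeClassifier(nn.Module):
    """Toy GRU + attention classifier over BPE token ids (hypothetical sizes)."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, n_types=500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, n_types)

    def forward(self, token_ids):                      # (batch, seq_len)
        h, _ = self.gru(self.embed(token_ids))         # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over context positions
        context = (weights * h).sum(dim=1)             # (batch, 2*hidden)
        return self.out(context)                       # unnormalised type scores

model = TypeClassifier()
logits = model(torch.randint(1, 10000, (4, 60)))       # 4 variables, 60 context tokens each
probs = torch.softmax(logits, dim=-1)                  # per-type confidence for thresholding
```

In such a setup, the softmax over the type classes directly yields the per-type confidence scores that a threshold (e.g., 0.9) can be applied to.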
Based on the given references, a second-order full expansion ETG finite element method is developed and combined with the large eddy simulation (LES) technique to simulate the flow around two square cylinders arranged side by side at Re = 10,000. The objective of this method is to capture large-scale eddies and their variation with time. Consequently, the time-domain pressure histories at the given points and the analysis of their spectra are completed. It is concluded that the pressure histories at the given symmetric points on the surfaces of the two cylinders with symmetric boundary conditions are asymmetric, but the differences are concentrated in the higher-frequency band, which carries little energy.
This chapter shows that at every level of the main hierarchy, there are maximal degrees (Theorem 4.1). Thus, for example, there are maximal degrees with respect to not bounding a critical triple, namely, maximal totally ω-c.a. degrees. Since the totally ω-c.a. degrees are naturally definable, one obtains a naturally definable antichain in the c.e. degrees. The only previously known such antichain consisted of the maximal contiguous degrees. On the other hand, the chapter demonstrates (Theorem 4.12) that maximality cannot go too far, that is, to the next level. It also investigates bounding by maximal degrees. For example, there are totally ω-c.a. degrees bounded by no such maximal degrees.
A classical area of extremal combinatorics is Ramsey theory. Introduced by Ramsey and popularized by Erdős and Szekeres , the Ramsey number of a graph $G$, commonly denoted by $r(G)$, is the smallest $n$ so that every edge bicoloring of the complete graph $K_n$ contains a monochromatic copy of $G$. Shrinking the sizable gap between the asymptotic upper/lower bounds on $r(K_n)$ has been a major open problem for decades, spurring extensive work on a plethora of related questions in Ramsey theory. One variant of Ramsey numbers which has recently received attention is the analogue for ordered graphs. An *ordered graph* on $[n]$ is a graph on $n$ vertices which are given distinct labels in $\{1,\dots,n\}$. Given an ordered graph $G$, the *ordered Ramsey number* of $G$, denoted by $r_<(G)$, is the smallest $n$ so that every edge bicoloring of the ordered complete graph on $n$ vertices contains a monochromatic copy of $G$ which preserves the relative vertex ordering of $G$. As with the unordered case, one can define the *off-diagonal* ordered Ramsey number of two graphs $G$ and $H$, denoted by $r_<(G,H)$, as the smallest $n$ so that every edge bicoloring of the ordered complete graph on $n$ vertices contains either an order preserving red copy of $G$ or an order preserving blue copy of $H$. The first systematic studies of ordered Ramsey numbers were conducted by Conlon, Fox, Lee, and Sudakov and by Balko, Cibulka, Král, and Kynčl . However, as pointed out by the authors of , a number of classic results in extremal combinatorics can be reinterpreted as statements about ordered Ramsey numbers. For instance, Erdős and Szekeres proved that every sequence of at least $(n-1)^2 + 1$ distinct numbers contains either an increasing subsequence of length $n$ or a decreasing subsequence of length $n$. This result is implied by the bound $r_<(P_n, K_n) \leq (n-1)^2 + 1$, where $P_n$ is the $n$-vertex path imbued with the natural monotonic ordering: for any sequence of $n$ distinct numbers $x_1,\dots,x_n$, color $(i,j)$ red if $x_i < x_j$ and blue otherwise. Perhaps the simplest nontrivial family of ordered graphs from the perspective of ordered Ramsey theory is *matchings*, in which every vertex has degree $1$. Conlon, Fox, Lee, and Sudakov provide a number of bounds for general matchings, for matchings satisfying certain properties, and for off-diagonal ordered Ramsey numbers involving matchings. Relevant to this paper is their work on bounding the largest possible value of $r_<(M, K_3)$, where $M$ is a matching. They have the following result: **Theorem 1** (Conlon, Fox, Lee, and Sudakov ). *There are positive constants $c_1$ and $c_2$ such that for all even positive integers $n$, $$c_1 \left ( \frac{n}{\log n} \right)^{4/3} \leq \max_{M} \, r_<(M, K_3) \leq c_2 \frac{n^2}{\log n}$$ where the maximum is taken over all ordered matchings $M$ on $n$ vertices.* The upper bound in this theorem is in some sense trivial. Since every graph on $n$ vertices embeds in the complete graph $K_n$, and the ordered Ramsey number $r_<(K_n, K_3)$ is equal to the Ramsey number $r(n, 3)$, which has been asymptotically determined to be $\Theta(n^2/\log n)$, it follows (as pointed out in ) that $r_<(M, K_3) = O(n^2/\log n)$ for a matching $M$ on $n$ vertices. However, this bound does not make use of any properties of matching graphs, only making use of the fact that every graph on $n$ vertices can be embedded in $K_n$. 
For this reason, among others, Conlon, Fox, Lee, and Sudakov hypothesize that the upper bound can be improved to $r_<(M, K_3) \leq n^{2-\epsilon}$ for some $\epsilon > 0$. We contribute two results in the direction of this conjecture. We first look at the special case of ordered matchings in which the edges do not cross. That is, for any two edges $(i,j)$ and $(k,l)$ with $i<j$ and $k<l$, the intervals $[i,j]$ and $[k,l]$ are either disjoint or nested one inside the other. We call the matchings which satisfy this condition "parenthesis matchings", after the useful fact that these matchings correspond to balanced parenthesis sequences. Indeed, it is this correspondence which partially motivates our proof of the following theorem. *Theorem 1*. For any $\epsilon > 0$ there is a constant $c$ such that every parenthesis matching $M$ on $n$ vertices has $$r_<(M, K_3) \leq cn^{1+\epsilon}.$$ To state our second result, we must define the *interval chromatic number* of an ordered graph. Analogous to the chromatic number of an unordered graph, the interval chromatic number $\chi_<(G)$ of a graph $G$ is the minimum number of contiguous intervals into which the vertex set must be split so that each interval is an independent set in $G$. Conlon, Fox, Lee, and Sudakov present a number of general results accompanied by much stronger specific results for matchings with small interval chromatic number . In a similar spirit, we prove a sub-quadratic bound on $r_<(M, K_3)$ for random matchings with interval chromatic number $2$. *Theorem 1*. There is a constant $c$ such that for every even $n$, if an ordered matching $M$ on $n$ vertices with interval chromatic number $2$ is picked uniformly at random, then $$r_<(M, K_3) \leq cn^{\frac{24}{13}}$$ with high probability. Observe that the statement is not probabilistic over bicolorings; rather, it is a true Ramsey-type result which applies to almost all matchings.
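As a concrete illustration of the Erdős–Szekeres coloring described earlier in this introduction, the following sketch (our own illustrative code, not part of the paper) builds the bicoloring induced by a sequence and recovers monotone subsequences as monochromatic ordered structures. Note that for this particular coloring a blue monotone path is automatically a blue clique, i.e., a decreasing subsequence.

```python
import itertools

def es_coloring(seq):
    """Edge (i, j), i < j, is 'red' if seq[i] < seq[j], else 'blue'."""
    return {(i, j): ("red" if seq[i] < seq[j] else "blue")
            for i, j in itertools.combinations(range(len(seq)), 2)}

def longest_mono_path(coloring, n, color):
    """Longest monochromatic ordered path (DP over the natural vertex order)."""
    best = [1] * n                      # best[j] = longest such path ending at vertex j
    for j in range(n):
        for i in range(j):
            if coloring[(i, j)] == color:
                best[j] = max(best[j], best[i] + 1)
    return max(best)

seq = [5, 1, 4, 2, 8, 7, 3, 9, 6, 0]    # any 10 = (4-1)^2 + 1 distinct numbers
col = es_coloring(seq)
print(longest_mono_path(col, len(seq), "red"),    # longest increasing subsequence
      longest_mono_path(col, len(seq), "blue"))   # longest decreasing subsequence
```

Running this on any $(n-1)^2+1$ distinct numbers, at least one of the two printed lengths is at least $n$, as Erdős and Szekeres guarantee.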
We consider a general mesh network with multiple traffic streams subject to window flow control on a per hop, per stream basis. Scheduling at each server is governed by "service curve" requirements. We establish lower bounds on the window sizes such that each stream receives pre-specified service guarantees.
Collapse is a common cartographic generalization operation in the multi-scale representation and cascade updating of vector spatial data. During the transformation from large to small scale, the dual-line river undergoes progressive collapse from narrow river segments to lines. The demand for vector spatial data at various scales is increasing; however, research on the progressive collapse of dual-line rivers is lacking. Therefore, we propose a progressive collapse method based on vector spatial data. First, based on the skeleton graph of the dual-line river, narrow and normal river segments are preliminarily separated by calculating the width of the river. Second, combined with the rules of cartographic generalization, collapse-priority and exaggeration-priority strategies are formulated to determine how each river segment is handled. Finally, based on the two strategies, progressive collapse of dual-line rivers is realized by collapsing and exaggerating the river segments. Experimental results demonstrated that the progressive collapse results of the proposed method were scale-driven, and the collapsed part had no burrs or topology problems, whereas the remaining part was clearly visible. Qualitative and quantitative comparison with another progressive collapse method showed that the proposed method is better suited to the progressive collapse of dual-line rivers.
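To make the width-based segmentation step concrete, here is a minimal sketch (our own, with hypothetical function names, step size, and threshold) of how one might walk a precomputed centerline of a dual-line river polygon, estimate the local width, and flag narrow stretches as collapse candidates. The paper's skeleton-graph construction and its generalization rules are not reproduced here.

```python
from shapely.geometry import Polygon, LineString

def classify_segments(river: Polygon, centerline: LineString,
                      min_width: float, step: float = 5.0):
    """Walk along a precomputed centerline, estimate local river width as twice
    the distance to the nearest bank, and mark narrow stretches for collapse."""
    segments, narrow = [], None
    d = 0.0
    while d <= centerline.length:
        p = centerline.interpolate(d)                # point on the centerline
        width = 2 * river.exterior.distance(p)       # local width estimate
        is_narrow = width < min_width                # scale-dependent threshold
        if narrow is None or is_narrow != narrow:    # start a new segment on each change
            segments.append({"start": d, "narrow": is_narrow})
            narrow = is_narrow
        d += step
    return segments
```

In a full pipeline, the `narrow` stretches would be collapsed to their centerline while the remaining stretches are kept (or exaggerated), following the two strategies described above.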
Edge metrics of complete and incomplete type provide interesting examples of geometries that include asymptotically hyperbolic, asymptotically cylindrical and conic spaces. Incomplete and complete edge metrics behave differently from a geometric perspective. However, these metrics lend themselves to a parallel approach when one wants to do constructions of an analytic flavour. For example, to analyze asymptotics of solutions to the heat equation near spatial infinity in the complete case, one must introduce a geometric compactification. The resulting complete edge metrics, like their incomplete cousins, give rise to differential operators that are degenerate or singular in a sufficiently controlled manner that generalization of classical results is possible. In particular, we consider the Laplacian on manifolds with either type of edge metric and establish regularity properties of solutions to the heat equation. We study the cases in parallel, reviewing and extending previous work by Jeffres and Loya for conic and b-metrics . We begin by defining these metrics precisely. Let $\overline{M}$ be an $m$-dimensional compact manifold with boundary $\partial M$, where $\partial M$ is the total space of a fibration $\phi: \partial M \to B$, and the fibre $F$ and base $B$ are closed manifolds. We consider a defining function $x\in C^{\infty}(\overline{M})$, $x:\overline{M}\to \R^+\cup \{0\}$, of the boundary $\partial M$ with $x^{-1}(0)=\partial M$ and $dx\neq 0$ on $\partial M$. Using the integral curves of $\textup{grad} (x)$ we identify a collar neighbourhood $U\subset \overline{M}$ of the boundary $\partial M$ with $[0,1)\times \partial M$, where $\partial M$ is identified with $\{0\}\times \partial M$. We will refer to either of these metrics as an edge metric. As the names imply, incomplete or complete edge metrics are incomplete or complete as Riemannian metrics. In the complete case $\partial M$ is at infinite distance from any point in the interior of $\overline{M}$; note that the two types of edge metrics are conformal in the interior of $\overline{M}$. Examples of complete edge metrics include asymptotically hyperbolic (conformally compact or $0$-metrics) and asymptotically cylindrical (b-metrics) metrics, with $\dim F = 0$ and $\dim B = 0$ respectively. The product metric on $\bH^{b+1} \times \bS^f$ provides an example of a complete edge metric with trivial fibration structure. Examples of incomplete edge metrics include conic metrics and conformal compactifications of asymptotically hyperbolic metrics. We consider a slightly restricted class of edge metrics, requiring that $\phi: (\partial M, g^F + \phi^*g^B) \to (B, g^B)$ be a Riemannian submersion. Recall that if $p\in \partial M$, then $T_p\partial M$ splits into vertical and horizontal subspaces as $T^V_p \partial M \oplus T^H_p \partial M$, where $T^V_p\partial M$ is the tangent space to the fibre of $\phi$ through $p$ and $T^H_p \partial M$ is the orthogonal complement of this subspace. The new condition on $g_0$ implies that the restriction of the tensor $g^F$ to $T^H_p \partial M$ vanishes. Moreover, in the case of incomplete edge metrics we need to assume that the Laplacians associated to $g^F$ at each $b\in B$ are isospectral. We summarize these additional conditions in the definition below.
Our perspective for establishing regularity for solutions to the heat equation is as follows: the mapping properties of the corresponding heat operator are encoded in the asymptotic behaviour of the heat kernel on an appropriate blowup of the heat space, as established by Mazzeo and the third author for incomplete and by Albin for complete edge metrics. The regularity is discussed in terms of spaces with bounded edge derivatives and appropriate Hölder spaces, which take into account the underlying singular or asymptotic geometry. To make this precise we introduce the notion of edge vector fields $\mathcal{V}_e$. The space $\mathcal{V}_e$ is closed under the ordinary Lie bracket of vector fields, hence defines a Lie algebra. In local coordinates $\mathcal{V}_e$ can be described as follows. Let $y=(y_1,...,y_{b}),b=\dim B$ be the local coordinates on $B$ lifted to $\partial M$ and then extended inwards. Let $z=(z_1,...,z_f),f=\dim F$ restrict to local coordinates on $F$ along each fibre of $\partial M$. Then $(x,y,z)$ are the local coordinates on $\overline{M}$ near the boundary and the edge vector fields $\mathcal{V}_e$ are locally generated by $$\left\{x\frac{\partial}{\partial x}, x\frac{\partial}{\partial y_1}, ..., x \frac{\partial}{\partial y_b}, \frac{\partial}{\partial z_1},..., \frac{\partial}{\partial z_f}\right\}.$$ In the case of incomplete edge metrics it also makes sense to restrict the Banach space of continuous functions to those which are fibrewise constant at $x=0$. This is precisely the space of continuous functions $\mathscr{C}^0_e(M,g)$ with respect to the topology on $M$ induced by the Riemannian metric $g$. The corresponding space of continuous $k$-times edge-differentiable functions shall be denoted by $\mathscr{C}^k_e(M,g)$, with $$\mathscr{C}^k_e(M,g):=\{u \in C^k_e(M,g) \mid V_1\cdots V_j u \in \mathscr{C}^0_e(M,g) \ \textup{for any} \ V_i \in \mathcal{V}_e \},$$ which is a Banach subspace of $C^k_e(M,g)$. For the Hölder space with fractional differentiability we set $\mathscr{C}^{\A}_e(M,g):=C^{\A}_e(M,g)$. Note that similar considerations are possible in the setup of complete edge metrics, but in that case these do not lead to a refinement of the regularity statement. Our estimates with respect to these spaces are as follows, where a precise definition of the heat operator is given in §. For complete edges the result is similar, although the heat operator acts between weighted edge spaces in this case. A function $u$ is in a weighted edge space, denoted $u \in x^w C^k_e(M,g)$, if and only if $u = x^w v$, with $v \in C^k_e(M,g)$. An immediate observation from these estimates is that after taking into account the geometry of the underlying space, the mapping properties of the heat operator resemble the well-known behaviour on compact manifolds. As a particular consequence of our results, we establish short-time existence for solutions to certain semilinear parabolic equations on manifolds with edge metrics. Applications of such equations, including the reaction-diffusion equation, may be found in . Our discussion of the heat operator in the edge setup reviews and generalizes the work of Jeffres and Loya in and , where the case $\dim B=0$ was considered. Their work was based on the heat kernel analysis by Mooers in the incomplete (conical) case and by Melrose in the complete (b-cylindrical) setup. 
The analysis of the heat operator in the presence of incomplete singularities was initiated by Cheeger , with major contributions by Brüning and Seeley , , Lesch , Melrose and Mazzeo , to name a few. Related questions on regularity properties of solutions to parabolic equations in the singular setup have been studied in , and . Most recently, a study of the inhomogeneous Cauchy problem on manifolds with incomplete conical metrics has been presented by Behrndt . The paper is organized as follows. In § we review the asymptotic properties of the heat kernel as a polyhomogeneous distribution on the appropriate blowup of the heat space. We discuss the incomplete and complete cases separately since the asymptotic properties are different. In § we apply the asymptotics of the heat kernel to derive the mapping properties of the heat operator and hence the regularity of solutions to the heat equation, carefully estimating the corresponding integral in various regions of the heat-space blowup. Finally, in § we explain how these mapping properties yield short-time existence for solutions to certain semilinear parabolic equations. *Acknowledgements:* It is a pleasure for the authors to acknowledge helpful discussions with Pierre Albin, Paul Loya and Rafe Mazzeo. The first author appreciates the hospitality of the Bucknell University Mathematics Department under the auspices of the Distinguished Visiting Professor program during the preparation of this paper. The first and second authors are grateful to the Mathematical Sciences Research Institute for giving them the opportunity to be exposed to this area during the Fall 2008 program "Analysis of Singular Spaces." The third author gratefully acknowledges financial support by the German Research Foundation DFG as well as by the Hausdorff Center for Mathematics in Bonn, and also thanks Stanford University for its hospitality.
The BioASQ Challenge[^1] consists of various tasks related to biomedical semantic indexing and question answering . Our participation in BioASQ for 2018 focused on Task B Phase B, where our system attempted to find the ideal answer given a question and a collection of relevant snippets of text. We approached this task as an instance of query-based multi-document summarisation, where the ideal answer is the summary to produce. The BioASQ challenge focuses on a restricted domain, namely biomedical literature. Nevertheless, the techniques developed for our system were domain-agnostic and can be applied to any domain, provided that the domain has enough training data and a specialised corpus large enough to train word embeddings. We were interested in exploring the use of deep learning and reinforcement learning for this task. Thus, Section  explains our experiments using deep learning techniques. Section  details our experiments using reinforcement learning. Section  specifies the settings used in the experiments. Section  shows and discusses the results, and Section  concludes the paper.
Underwater robot picking refers to using robots to automatically grab mariculture organisms such as sea cucumbers, sea urchins, or scallops in open-sea farms, where underwater object detection is the first and key step. In recent years, thanks to the superior feature representation ability of deep CNNs and the availability of large datasets (MS COCO ), general object detection has achieved remarkable success. However, less progress has been made in underwater object detection, because there are still some tough challenges, which mainly come from three aspects:
Young's lattice is a prototypical example of a differential poset, which was first defined by Stanley . The Robinson correspondence is a correspondence between permutations and pairs of standard tableaux whose shapes are the same Young diagram. This correspondence was generalized to differential posets or dual graphs (generalizations of differential posets ) by Fomin . (See also .) Young's lattice also admits the Robinson-Schensted-Knuth correspondence, a correspondence between certain matrices and pairs of semi-standard tableaux. Fomin introduced operators called generalized Schur operators, and generalized the Robinson-Schensted-Knuth correspondence for generalized Schur operators. We define a generalization of Schur polynomials as expansion coefficients of generalized Schur operators. A complete symmetric polynomial is a Schur polynomial associated with a Young diagram consisting of only one row. Schur polynomials satisfy Pieri's formula, the formula describing the product of a complete symmetric polynomial and a Schur polynomial as a sum of Schur polynomials: $$\begin{gathered} h_i(t_1,\ldots,t_n)s_\lambda(t_1,\ldots,t_n)=\sum_\mu s_\mu(t_1,\ldots,t_n), \end{gathered}$$ where the sum is over all $\mu$'s that are obtained from $\lambda$ by adding $i$ boxes, with no two in the same column, $h_i$ is the $i$-th complete symmetric polynomial, and $s_\lambda$ is the Schur polynomial associated with $\lambda$. In this paper, we generalize Pieri's formula to generalized Schur polynomials. *Remark 1*. Lam introduced a generalization of the Boson-Fermion correspondence . In that paper, he also showed Pieri's and Cauchy's formulae for some families of symmetric functions in the context of Heisenberg algebras. Some important families of symmetric functions, e.g., Schur functions, Hall-Littlewood polynomials, Macdonald polynomials and so on, are examples of them. He proved Pieri's formula using essentially the same method as the one in this paper. Since the assumptions on generalized Schur operators are weaker than those of Heisenberg algebras, our polynomials are more general than his; e.g., some of our polynomials are not symmetric. An example of generalized Schur operators which provides non-symmetric polynomials is given in Section . See also Remark for the relation between and this paper. This paper is organized as follows: In Section $\ref{GSdefsec}$, we recall generalized Schur operators and define generalized Schur polynomials. We also define a generalization of complete symmetric polynomials, called weighted complete symmetric polynomials, in Section $\ref{WCdefsec}$. In Section $\ref{mainsec}$, we show Pieri's formula for these polynomials (Theorem $\ref{genepieri}$). We also see that Theorem $\ref{genepieri}$ becomes simple for special parameters, and that weighted complete symmetric polynomials are written as linear combinations of generalized Schur polynomials in a special case. Other examples are shown in Section $\ref{EXsec}$.
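For orientation, here is a worked instance of the classical Pieri rule recalled above; it is our own illustrative example in the classical Schur-polynomial setting, not the generalized operator version developed in the paper.

```latex
% Multiply h_2 by s_{(2,1)}: the sum runs over the partitions obtained from
% (2,1) by adding a horizontal strip of two boxes (no two in the same column).
\[
  h_2(t_1,\ldots,t_n)\, s_{(2,1)}(t_1,\ldots,t_n)
    = s_{(4,1)} + s_{(3,2)} + s_{(3,1,1)} + s_{(2,2,1)} .
\]
```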
As wireless cellular networks evolve towards the 5G generation, it is expected that the number of Base Stations (BSs) per area will noticeably increase , leading to *network densification*. The availability of multiple proximate and interconnected BSs leads to the usage of cooperative transmission/reception techniques, commonly referred to as *Coordinated Multi-Point (CoMP)* . Motivated by these recent trends, a transmission scheme for serving bidirectional traffic simultaneously via spatially separated Half-Duplex BSs (HD-BSs) was investigated in . The scheme emulates Full Duplex (FD) operation using two interconnected HD-BSs, and is termed *CoMPflex*: CoMP for In-Band Wireless Full-Duplex. In the initial work, the performance was analyzed through a simplified one-dimensional Wyner-type deployment model. CoMPflex can be seen as a generalization of FD, where a FD BS corresponds to CoMPflex with interconnection distance zero. We show that the nonzero separation distance in CoMPflex brings two benefits: (i) the distance between a BS and its associated Mobile Stations (MSs) decreases; and (ii) the distance between two interfering MSs increases. This translates into improved transmission success probability in the uplink (UL) and downlink (DL). The use of in-band FD wireless transceivers  has recently received significant attention. However, due to the high transceiver complexity, FD is currently only feasible at the network infrastructure side , and the MSs keep the HD transceiver mode. An in-band FD BS can serve one UL MS and one DL MS simultaneously, on the same frequency. Other approaches to FD emulation by HD devices have been studied in the literature, such as having the transmissions in UL and DL (partially) overlap in time. That is, the UL and DL time slots, which conventionally should take place at separate time intervals, are now overlapping, an approach used by Rapid On-Off Division Duplexing (RODD) in . The authors of consider the physical UL and DL channels themselves, and have them overlap. Compared to these approaches, CoMPflex takes advantage of the spatial dimensions in a cellular network. In this paper, we treat CoMPflex in a two-dimensional scenario, with planar deployment of interconnected HD-BSs. Following the trend in the literature for modeling the spatial randomness of network nodes, we analyze the performance using the tools of stochastic geometry. Stochastic geometry has been used in many papers in the literature to model the placement of network nodes, including FD-capable ones as in . The setup of CoMPflex is shown in Fig. , where one HD-BS working in the UL cooperates via a wired connection (double solid line) with another HD-BS that operates in the DL. The solid arrows indicate signals; the dashed arrows indicate interference. Two interfering cells are also shown, and the boundaries between them are indicated by dotted lines. Using the interconnection link, the interference from the DL-BS to the UL-BS is perfectly canceled. Note that, for the sake of clarity, not all interference from neighboring cells is shown. The rest of this paper is structured as follows. In Sec. , we describe the system model, deployment assumptions and MS association, as well as the FD baseline scheme. The signal model and interference characteristics are given in Sec. . In Sec. , we derive and prove the expression of the success probability for CoMPflex in UL and DL, and explain the approximations used. The numerical results are provided and discussed in Sec. 
, where the effects of the CoMPflex scheme on signal and interference link distances are shown. The paper is concluded in Sec. .
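The success-probability analysis described above can be prototyped with a small stochastic-geometry Monte Carlo. The sketch below is a generic illustration only: it fixes the tagged link distance, draws interfering MSs from a PPP in a disc, and assumes Rayleigh fading; the densities, distances, and threshold are hypothetical, and it does not reproduce the paper's CoMPflex association model or its analytical derivation.

```python
import numpy as np

def uplink_success_prob(r0=50.0, lam_i=1e-5, radius=2000.0, alpha=4.0,
                        theta=1.0, n_trials=5000, seed=0):
    """Monte Carlo estimate of P[SIR >= theta] at a BS at the origin:
    tagged MS at distance r0, interferers from a PPP of density lam_i,
    Rayleigh fading on all links (illustrative values)."""
    rng = np.random.default_rng(seed)
    area = np.pi * radius ** 2
    successes = 0
    for _ in range(n_trials):
        n_int = rng.poisson(lam_i * area)
        d = radius * np.sqrt(rng.random(n_int))   # distances of uniform points in the disc
        h0 = rng.exponential(1.0)                 # fading on the useful link
        h = rng.exponential(1.0, n_int)           # fading on the interfering links
        interference = np.sum(h * d ** -alpha) if n_int else 0.0
        sir = np.inf if interference == 0 else (h0 * r0 ** -alpha) / interference
        successes += sir >= theta
    return successes / n_trials

print(uplink_success_prob())
```

Sweeping `r0` or `lam_i` in such a simulation is a quick way to sanity-check how shorter serving links and longer interfering links, the two benefits attributed to CoMPflex above, move the success probability.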
The Enigma Machine is a complex electromechanical device used by the Germans in World War II to achieve what they thought was complete communications security. While the original machine weighed over 20 lbs, the central mechanics of the machine can be simulated manually by manipulating strips of paper. A Paper Enigma is presented that can be cut out of a single sheet of paper. The resulting simulator is compatible with the electromechanical original in that messages can be encoded on one, and decoded on the other. Copies of The Paper Enigma can be downloaded from http://mckoss.com/crypto/enigma.htm.
Suppose the president of a bank, Alice, wants to give access to a vault to two vice-presidents, Bob and Charlie, whom she does not entirely trust. Instead of giving the combination to any one of them, she may desire to distribute the information in such a way that no vice-president alone has any knowledge of the combination, but both of them can jointly determine the combination. Cryptography provides the answer to this question in the form of *secret sharing* . In this scheme, some sensitive data is distributed among a number of parties such that certain authorized sets of parties can access the data, but no other combination of players can. A particularly symmetric variety of secret splitting (sharing) is called a *threshold scheme*: in a $(k,n)$ classical threshold scheme (CTS), the secret is split up into $n$ pieces (shares), of which any $k$ shares form a set authorized to reconstruct the secret, while any set of $k-1$ or fewer shares has no information about the secret. Blakley and Shamir showed that CTSs exist for all values of $k$ and $n$ with $n \geq k$. By concatenating threshold schemes, one can construct arbitrary access structures, subject only to the condition of monotonicity (i.e., sets containing authorized sets should also be authorized) . Hillery *et al.* and Karlsson *et al.* proposed methods for implementing CTSs that use *quantum* information to transmit shares securely in the presence of eavesdroppers. Subsequently, extending the above idea to the quantum case, Cleve, Gottesman and Lo , using the notion of quantum erasure correction , presented a $(k,n)$ *quantum* threshold scheme (QTS) as a method to split up an unknown secret quantum state $|S\rangle$ into $n$ pieces (shares) with the restriction that $k > n/2$; this inequality is needed to ensure that no two disjoint sets of players are able to reconstruct the secret, in conformance with the quantum no-cloning theorem . QSS has been extended beyond QTS to general access structures , but here no two authorized sets may be mutually disjoint: given a QSS access structure $\Gamma = \{\alpha_1,\cdots,\alpha_r\}$ over $N$ players, the no-cloning restriction entails that: $$\label{noklo} \alpha_j \cap \alpha_k \ne \phi~~~~~\forall j,k.$$ Potential applications of QSS include creating joint checking accounts containing quantum money , sharing hard-to-create ancilla states , or performing secure distributed quantum computation . A tri-qubit QSS scheme has recently been implemented . The chances of practical implementation of QSS are improved by employing equivalent schemes that maximize the proportion of classical information processing . It has been shown that quantum teleportation and entanglement swapping may be used to implement an $((n,n))$-threshold scheme. The requirement of Eq. () places a restriction quite unnatural in applications, where we may more likely expect to find groups of people with mutual trust within each group, and hardly any outside it. The first aspect of our present work is aimed at studying a way to overcome this limitation. In particular, in Sections and we show that allowing the dealer to withhold a small number of shares permits arbitrary access structures to be acceptable, subject only to monotonicity. This modified scheme we call "assisted QSS" (AQSS), the shares withheld by the dealer being called "home shares".
While more general than conventional QSS, AQSS is clearly not as general as classical secret sharing, since it requires the shares given to the (non-dealer) players, called "player shares", to be combined with the home shares for reconstructing the secret. In spite of this limitation, the modified scheme can be useful in some applications of secret sharing, in particular those in which the secret dealer is by definition a trusted party and where reconstruction of the secret effectively occurs by re-convergence of shares at the dealer's station. Further, it could be useful for schemes like circular QSS . We note that the home shares by themselves give no information. In the bank example above, access is allowed by the bank vault (which can be thought of effectively as the dealer, acting as the bank president's proxy) if the secret reconstructed from the vice-presidents' shares is the required password. The vault thus effectively serves as both the dealer and the site of secret reconstruction. In AQSS, the player shares are combined with the home share(s) to reconstruct the secret. Clearly, this leads to no loss of generality in this type of QSS. Where the secret dealer is not necessarily trusted, such as in multi-party secure computation (MPSC), AQSS may be less useful, though here again only a more detailed study can tell whether MPSC cannot be turned into a suitable variant of AQSS. It is assumed that all the $n$ (quantum) shares are somehow divided among the $N$ players. In an AQSS scheme, $m < n$ shares are allowed to remain with the share dealer, as home shares. In order that AQSS should depart minimally from conventional QSS, we further require that the number of home shares be *the minimum possible* such that a violation of Eq. () can be accommodated. Thus, a conventional QSS access structure like $\Gamma = \{ABC, ADE, BDF\}$, which as such conforms to the no-cloning theorem, will require no *share assistance*. A conventional QSS scheme is a special case of AQSS, in which the set of home shares is empty. We prove by direct construction that, by allowing for non-zero home shares, the restriction () does not apply to AQSS. Therefore, with share assistance, the only restriction on the access structure $\Gamma$ in AQSS is monotonicity, as with classical secret sharing. Such constructions are described in detail in Sections and . Another cryptographic primitive where (multipartite) entanglement can be effectively used is that of quantum key distribution (QKD) and its generalization to $n$ parties ($n$-QKD). Note that $n$-QKD involves sharing a random key amongst $n$ *trustworthy* parties, unlike QSS, which splits quantum information among *untrustworthy* parties. Naturally, it would be an interesting extension to consider situations where some kind of mutual trust is present within sets of parties even though the parties are individually mistrustful, wherein it might be possible to combine the essential features of QKD and QSS. In Section , we discuss one such extension. We consider the problem of secure key distribution between two trustful groups whose individual members may be mistrustful. The two groups can retrieve the secure key string only if all members of each group cooperate with one another. That is, we consider how two groups, one of size $k$ and the other of size $n-k$, may share an identical secret key among themselves while an eavesdropper may cooperate with several (though not all) dishonest members from either group.
If $k=1$, the result is equivalent to an $(n-1,n-1)$ threshold secret sharing scheme. The QKD-QSS connection is manifest in several earlier works . Ref. first introduced the idea of using a GHZ state to implement a three-party secret sharing protocol. Ref. extends a method of QKD with reusable entanglement to QSS. Deng et al. extend the ping-pong QKD protocol to a three-party circular QSS scheme. Ref. presents a security proof for an $(n,n)$ scheme based on $n$-partite entangled states and the violation of Bell's inequalities, valid even when the $n$-qubit correlations are weak. In contrast, we employ only bipartite states. From the theoretical perspective, this provides the simplicity that we can build our protocol on top of QKD, which helps us reduce the security of our scheme to that of QKD. From a practical perspective, the multipartite states employed for QSS above have exponentially low efficiency even in the noiseless scheme, since only rounds in which all participants measure the same observable, $\sigma_x$ or $\sigma_y$, are retained, with all other measurement possibilities discarded. In contrast, in our protocol the key generation step involves a measurement by all parties in the diagonal basis, so that no wasted bits are produced through incompatible measurements by the various parties.
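For readers unfamiliar with the classical $(k,n)$ threshold scheme referred to above, here is a minimal Shamir-style sketch over a prime field. It is our own illustration of the classical primitive only and plays no role in the quantum protocols discussed in this paper.

```python
import random

P = 2**127 - 1                 # a large prime; all arithmetic is over GF(P)

def split(secret, k, n, rng=random.SystemRandom()):
    """(k, n) Shamir threshold scheme: any k of the n shares recover the secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):            # Horner evaluation of the random polynomial
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P   # needs Python >= 3.8
    return secret

shares = split(secret=42, k=3, n=5)
assert reconstruct(shares[:3]) == 42          # any 3 shares suffice
assert reconstruct(shares[1:4]) == 42
```

Any $k$ shares determine the degree-$(k-1)$ polynomial and hence the secret, while $k-1$ shares leave it information-theoretically undetermined.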
A micro-controller-based system has been developed for monitoring and controlling the water quality and quantity in dam reservoirs by using various sensors. This system is able to automatically detect and measure changes in the water and turbidity levels of incoming water for hydropower production. In this project, an Arduino UNO micro-controller and GSM technology control the operations of the system by sending messages and regulating automatic water valves according to the current status of the dam water. The developed prototype has four units: sensing unit, processing unit, displaying unit
This paper describes a new methodology for noninvasive, objective, and automated assessment of yield in vineyards using image analysis and the Boolean model. Image analysis, as an inexpensive and noninvasive procedure, has been studied for this purpose, but occlusions from the cluster itself or from other organs of the vine diminish the quality of the results. To reduce the influence of occlusions on the estimation, the number of berries was assessed using the Boolean model. To evaluate the methodology, three different datasets were studied: cluster images, manually acquired vine images, and vine images captured on-the-go using a quad. The proposed algorithm estimated the number of berries in cluster images with a root mean square error (RMSE) of 20 and a coefficient of determination (R2) of 0.80. Manually taken vine images were evaluated, yielding a mean error of 310 grams and R2 = 0.81. Finally, images captured using a quad equipped with artificial light and automatic camera triggering were also analysed. The estimation obtained by applying the Boolean model had a mean error of 610 grams per segment (three vines) and R2 = 0.78. The robustness of the Boolean model against occlusions and segmentation errors makes it well suited for vineyard yield estimation. Its application greatly improved the results when compared to a simpler estimator based on the relationship between cluster area and weight.
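The Boolean-model step can be sketched with the standard area-fraction (intensity) estimator, $\lambda = -\ln(1-p)/a$, where $p$ is the observed covered-area fraction and $a$ the mean area of a single berry. The snippet below is our own illustration with made-up numbers; the paper's exact formulation of the model may differ.

```python
import numpy as np

def boolean_model_count(mask, mean_berry_area_px):
    """Estimate the number of berries behind a binary berry mask using the
    Boolean-model intensity estimator, which compensates for berry overlap."""
    p = mask.mean()                                   # fraction of pixels covered by berries
    lam = -np.log(1.0 - p) / mean_berry_area_px       # berries per pixel
    return lam * mask.size                            # berries in the whole region

# toy usage with a synthetic mask (values are illustrative)
rng = np.random.default_rng(1)
mask = rng.random((400, 400)) < 0.3                   # pretend 30% of the region is berry pixels
print(boolean_model_count(mask, mean_berry_area_px=150.0))
```

The key property exploited here is that the estimator only needs the covered fraction and a mean grain size, so partially occluded or overlapping berries do not have to be segmented individually.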
The invention relates to an operation method for a dual-SIM (Subscriber Identity Module) card cell phone, comprising the following steps: S1, defining the standby screen of the cell phone to have two different backgrounds; S2, sliding left and right on the touch screen of a touch-screen cell phone, or pressing the left and right keys on a keyed cell phone, so as to switch the operation interface between the two different backgrounds, wherein the operation interfaces with different backgrounds correspond to different SIM cards; and S3, selecting different screen backgrounds as required so as to enter the operation interface corresponding to the desired SIM card. By implementing the invention, a user can simply and conveniently differentiate the SIM cards in the cell phone so as to prevent confusion; therefore, user time is saved and operation is convenient.
A video signal processing circuit is disclosed, belonging to the medical apparatus technology field. The video signal processing circuit consists of an OPA circuit LM6643 (1), a line-field separation circuit LM 1881 (2), an analog-to-digital conversion circuit AD 9984 (3), an FPGA circuit (4), a digital-to-analog conversion circuit ADV7123 (5), a DDR circuit (6), and a one-chip microcomputer circuit (7). The video signal processing circuit is characterized in that, after a video signal is subjected to isolation and noise filtering through the OPA circuit LM6643 (1), a line-field signal is separated from the video signal by the line-field separation circuit LM 1881 (2), and the video signal is converted into a digital signal by the analog-to-digital conversion circuit AD 9984 (3); after the signal is processed by the FPGA circuit (4), it is output to a monitor device through the digital-to-analog conversion circuit ADV7123 (5); the DDR circuit (6) is a peripheral storage circuit of the FPGA circuit (4); and the analog-to-digital conversion circuit AD 9984 (3) is configured by the one-chip microcomputer circuit (7), through which the video data can be subjected to manipulations such as mirroring, noise filtering, turning, and the like. The video signal processing circuit has the advantages of a simple and reliable design, high stability, and strong anti-interference capability.
The paradigm of Adiabatic Quantum Computation (AQC) proposed by Farhi *et al.*  about a decade ago is a simple yet intriguing approach to problem solving on a quantum computer. Unlike the leading paradigm of circuit-based quantum computing, AQC is an analog continuous-time method that does not require the design and use of quantum gates. As such, it can in many ways be thought of as a simpler and perhaps more profound method for performing quantum computations that is also easier to implement experimentally . Even though AQC has been shown to be polynomially-equivalent to circuit-based computation , and despite intensive research in the area (see, e.g., Refs.  and references therein), to date there are almost no clear-cut concrete examples of efficient quantum-adiabatic algorithms that reveal the potentially-powerful "fully-quantum" capabilities encompassed in AQC. One possible reason for that is, presumably, that there is usually no obvious way to 'tailor' the adiabatic algorithm to the specific problem being examined, and to make use of the structure of the problem to, for example, modify the beginning Hamiltonian in a clever way that would speed up the computation (a notable exception is Ref. ). For most of the interesting optimization problems, being able to do so may be as hard as solving the original problem itself [^1]. Interestingly, one of the few problems for which quantum speedup has been obtained in the context of AQC is Grover's unstructured search problem , in which one searches for a marked item in an unstructured database. Roland and Cerf  have demonstrated that while the application of the adiabatic algorithm to Grover's problem with a linear rate results in a running time of order $N$ (with $N$ the number of items in the database), a carefully chosen variable rate of the adiabatic parameter yields a running time that scales like $\sqrt{N}$, i.e., a quadratic speed-up is gained, similarly to the original result found by Grover for the circuit-based model [^2]. The adiabatic algorithm for Grover's problem utilizes the concept of local adiabatic evolution, in the framework of which the adiabatic parameter is varied not at a constant rate but rather at a variable rate, slowing down in the vicinity of the minimum gap and speeding up in places where the gap is large. Local adiabatic evolution, however, can only be used efficiently in cases where one has proper knowledge of the exact behavior of the gap and relevant matrix elements of the system for the problem in question. This is normally not the case. In the Grover problem, the ability to compute the gap and matrix element of the problem stems from prior knowledge of the spectrum of the problem Hamiltonian, which, ultimately, reduces the problem to a simple two-level system . In what follows, we consider a family of unstructured problems, which we refer to as 'scrambled output' models, and show how one may utilize knowledge of the spectrum of the Hamiltonian of the problem to find analog, continuous-time algorithms that are more efficient than their classical analogues. In that sense, this family of problems is a generalization, or an extension, of the problem solved by Roland and Cerf. In scrambled output problems, one has to find a minimum input configuration (i.e., $\arg \min$) of an $n$-bit function whose set of outputs (and their multiplicities) is given in advance, up to an unknown constant offset.
The exact mapping between the $N=2^n$ input configurations and the various outputs is also not given (i.e., it is as though the outputs of a known function have been scrambled in some arbitrary way). As we also discuss later, special cases in this family of problems are the unstructured database search problem considered by Roland and Cerf , certain variants of the random energy model and the functions of the Deutsch-Jozsa problem. We illustrate the manner in which AQC may be used to solve scrambled output problems by constructing efficient analog continuous-time algorithms for two specific examples: the Deutsch-Jozsa  problem, for which we find an efficient $O(1)$ deterministic solution, and a variant of the random energy model, for which the minimum energy configuration is found quadratically faster than with the corresponding classical algorithms. The paper is organized as follows. In the next section we briefly discuss the principles of the Quantum Adiabatic Algorithm that is at the heart of AQC, and with which the above models are solved. In Sec. , we describe scrambled output problems in detail. We then study two examples. In Sec. , we suggest an analog algorithm for the Deutsch-Jozsa problem, and in Sec.  we consider a variant of the random energy model. Finally, we conclude with a few comments in Sec. .
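To see why prior knowledge of the spectrum makes the Grover case tractable, the following sketch evaluates the standard spectral gap of the two-level Grover interpolation and integrates the local schedule $\dot{s}\propto g(s)^2$, recovering the $O(\sqrt{N})$ running time quoted above. The values of $N$ and the accuracy parameter `eps` are illustrative choices of ours.

```python
import numpy as np

def gap(s, N):
    """Gap of the Grover interpolation H(s) = (1-s)(I - |psi><psi|) + s(I - |m><m|)."""
    return np.sqrt(1.0 - 4.0 * (1.0 - 1.0 / N) * s * (1.0 - s))

def local_runtime(N, eps=0.1, steps=200_000):
    """Total time of the 'local' schedule ds/dt = eps * gap(s)^2, i.e. T = (1/eps) int ds / gap^2."""
    s = (np.arange(steps) + 0.5) / steps          # midpoint rule on [0, 1]
    return np.mean(1.0 / (eps * gap(s, N) ** 2))

eps = 0.1
for n in (8, 12, 16):
    N = 2 ** n
    print(n, local_runtime(N, eps), (np.pi / (2 * eps)) * np.sqrt(N))   # ~ (pi/2 eps) sqrt(N)
```

A linear schedule, by contrast, must run slowly enough everywhere to respect the minimum gap $g_{\min}\approx 1/\sqrt{N}$, which is what produces the $O(N)$ running time mentioned above.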
The crossing number $cr(G)$ of a graph $G$ is the minimum number of crossings over all possible drawings of $G$ in the plane. Analogously, the $k$-planar crossing number of $G$, denoted by $cr_{k}(G)$, is the minimum number of crossings over all possible drawings of the edges of $G$ in $k$ disjoint planes. We present new bounds on the $k$-planar crossing number of complete graphs and complete bipartite graphs. In particular, for the case of $k=2$, we improve the current best lower bounds on biplanar crossing numbers by a factor of 1.37 for complete graphs, and by a factor of 1.34 for complete bipartite graphs. We extend our results to the $k$-planar crossing number of complete (bipartite) graphs, for any positive integer $k \geq 2$. To better understand the relation between crossing numbers and biplanar crossing numbers, we pose a new problem of finding the largest crossing number that implies biplanarity. In particular, we prove that for every graph $G$, $cr(G) \leq 10$ implies $cr_{2}(G)=0$.
The Gamma-ray Large Area Space Telescope (GLAST) is an international mission that will study high-energy phenomena in the gamma-ray universe . GLAST is scheduled for launch in 2006. GLAST is instrumented with a hodoscope of silicon planes with slabs of converter, followed by a calorimeter; the hodoscope is surrounded by an anticoincidence detector (ACD). This instrument, called the Large Area Telescope (LAT), is sensitive to gamma rays in the energy range between 30 MeV and 300 GeV. The energy range, the field of view and the angular resolution of the GLAST LAT are vastly improved in comparison with those of its predecessor EGRET (operating in 1991-2000), so that the LAT will provide an advance in sensitivity by a factor of 30 or more. This improvement should enable the detection of several thousand new high-energy sources and allow the study of gamma-ray bursts and other transients, the resolution of the gamma-ray sky and diffuse emission, the search for evidence of dark matter, and the detection of AGNs, pulsars and SNRs. One can find a detailed description of the scientific goals of the GLAST mission and an introduction to the experiment in . GLAST is a complex system, and detailed computer simulations are required to design the instrument, to construct the response function and to predict the background in orbit. To accomplish these tasks, an object-oriented C++ framework called *Gleam* (GLAST LAT Event Analysis Machine) was adopted and implemented. The main components of Gleam are the subject of this paper.
The type system of a programming language system PL/LL is described. PL is a simple object oriented programming language and LL is a language for composing PL modules into programs. The goals of the PL/LL system are to enable the programming of efficient object-oriented computations and to provide the powerful linking language LL for facilitating the construction of large programs. The goal of the type system is to ensure efficient and secure object handling through a combination of static and dynamic type checking, and to preserve this property across module boundaries. The solution is based on (i) the module and linking concepts of LL, (ii) a language construct in PL for the safe creation of linked data structures, and (iii) a limited form of type polymorphism and type unification.
In this manuscript, we treat the well-known question whether having the same set of polynomial identities guarantees the isomorphism of algebras. Some obvious restrictions are necessary. In the non-simple case, an algebra $A$ satisfies the same polynomial identities as the direct sum $A\mathop{\mathrm{\oplus}}A$ of two copies of itself. If the field of coefficients is not algebraically closed, then there exist easy examples of non-isomorphic algebras satisfying the same set of ordinary polynomial identities; for example, the algebra of real quaternions $\mathbb{H}$ and the matrix algebra $M_2(\mathbb{R})$ have the same polynomial identities but $\mathbb{H}\not\cong M_2(\mathbb{R})$. So we need to restrict ourselves to the case of "simple" algebras over algebraically closed fields. Here being simple depends on the full set of structures on the algebras, for example graded-simple, involution-simple, differentially-simple and so on. In the context of Lie algebras this question was settled by Kushkulei and Razmyslov , and in the context of Jordan algebras by Drensky and Racine , and Shestakov and Zaicev obtained the result for arbitrary simple algebras . The case of associative algebras is trivial thanks to the Amitsur-Levitzki theorem. The case of simple associative algebras graded by an abelian group has been resolved by Koshlukov and Zaicev . Having analyzed the structure of $G$-graded simple associative algebras, $G$ not necessarily abelian, Aljadeff and Haile managed to extend the result of Koshlukov and Zaicev to arbitrary groups . Recently, Bianchi and Diniz studied the problem for arbitrary graded-simple algebras, where the grading is by an abelian group . Imposing various restrictions, this isomorphism question can be investigated in the case of other non-simple algebras. In , the authors study the question for certain types of gradings on the associative algebras of upper-block triangular matrices. Also, the classification of group gradings on upper triangular matrices, together with the proof of the classification of elementary gradings on the same algebra, leads to a positive answer in the context of graded associative algebras of upper triangular matrices.
The use of nonsymplectic procedures in particle tracing codes for relativistic electrons leads to errors that can be reduced only at the expense of using very small integration steps. More accurate results are obtained with symplectic transformations for position and momentum. A first-order symplectic integration procedure requires an iterative calculation of the new position coordinates using the old momenta, but the process usually converges in three or four steps. A first-order symplectic algorithm has been coded for cylindrical as well as Cartesian coordinates using the relativistic equations of motion with Hamiltonian variables. The procedure is applied to the steering of a beam of 80-keV electrons by a weak transverse magnetic field superposed on a strong magnetic field in the axial direction. The steering motion is shown to be parallel to the transverse field rather than perpendicular as would be the case without the strong axial field.
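To make the iteration concrete, here is a minimal sketch (ours, not the paper's code) of a first-order symplectic step for a relativistic particle with canonical momentum $\mathbf{p}$ and Hamiltonian $H(\mathbf{x},\mathbf{p}) = c\sqrt{|\mathbf{p}-q\mathbf{A}(\mathbf{x})|^2 + m^2c^2}$; the helper names, the zero scalar potential and the fixed 3-4 iterations follow the description above but are otherwise assumptions.

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def sympl_euler_step(x, p, h, q, m, A, grad_A):
    """One first-order symplectic (symplectic Euler) step.

    x, p     : arrays of shape (3,), canonical position and momentum
    h        : time step
    A(x)     : vector potential, returns shape (3,)
    grad_A(x): matrix G with G[i, j] = dA_j/dx_i, shape (3, 3)
    """
    def velocity(x_new):
        kin = p - q * A(x_new)                       # kinetic momentum at (x_new, p_old)
        energy_over_c = np.sqrt(kin @ kin + (m * C) ** 2)
        return C * kin / energy_over_c               # dx/dt = dH/dp

    # implicit position update x_new = x + h * v(x_new, p_old),
    # solved by fixed-point iteration (3-4 passes usually suffice)
    x_new = x.copy()
    for _ in range(4):
        x_new = x + h * velocity(x_new)

    # explicit momentum update: dp_i/dt = -dH/dx_i = q * sum_j (dA_j/dx_i) v_j
    v = velocity(x_new)
    p_new = p + h * q * grad_A(x_new) @ v
    return x_new, p_new
```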
We consider a variant of the vehicle routing problem on trees in which the goal is to route a fleet of vehicles to serve a set of clients so that the makespan, the longest distance traveled by one of the vehicles, is minimized. This is referred to as the Minimum Makespan Vehicle Routing problem on trees. We present a polynomial time approximation scheme (PTAS) for this problem. Our main insight is that we can restrict the set of potential solutions without adding too much to the optimal makespan. This simplification relies on partitioning the tree into clusters such that there exists a near-optimal solution in which every tour that visits a given cluster takes on one of a few forms. In particular, there are at most two tours serving clients in any cluster. Our dynamic program then finds a near-optimal set of tours using these simple building blocks within each cluster rather than making decisions for covering each leaf in the tree. This limits the amount of rounding error incurred by the dynamic program, yielding the PTAS.
Forecasting models for tropical cyclone tracks at 36, 48, 60 and 72 hours are built using BP (with a momentum term), LM and RBF artificial neural networks, and are run on 100 independent samples. The results show that models with a good fit generally produce poor forecasts. The keys to avoiding this situation are proper network structure parameters, a suitable training algorithm and well-chosen predictors. Since artificial neural network models lack a mechanism for automatic predictor adaptation, a stepwise predictor-adaptation algorithm for the RBF model is proposed in this study. The comparison with the other models shows that the suggested algorithm is worth trying in routine forecasting operations.
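The stepwise adaptation can be pictured as a greedy forward-selection loop; the sketch below is a generic stand-in (an RBF-kernel regressor in place of the paper's RBF network, with an assumed cross-validation stopping rule), not the algorithm as published.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

def stepwise_select(X, y, max_predictors=8, tol=1e-3):
    """Greedily add the predictor that most improves cross-validated skill."""
    n_features = X.shape[1]
    selected, best_score = [], -np.inf
    while len(selected) < max_predictors:
        scores = {}
        for j in range(n_features):
            if j in selected:
                continue
            cols = selected + [j]
            model = KernelRidge(kernel="rbf", alpha=1.0)   # stand-in for the RBF network
            scores[j] = cross_val_score(model, X[:, cols], y, cv=5).mean()
        if not scores:
            break
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score + tol:             # no meaningful improvement
            break
        selected.append(j_best)
        best_score = scores[j_best]
    return selected, best_score
```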
The problem addressed in this paper is a variant of the classical *network design* problem. In many applications, the average routing cost between pairs of nodes is a sensible network characteristic, one that we seek to minimize. If costs are *additive* (e.g., time delay) and *symmetric*, an edge-weighted, undirected graph $G$ is a suitable model of the connections between endpoints. The task is then to find a spanning subgraph $T$ of $G$ that minimizes the average distance. Johnson *et al.*  study the problem when the total edge weight of $T$ is required to be less than a given budget constraint. They prove this problem to be NP-complete, even in the special case when all weights are equal and the budget constraint forces the solution to be a *spanning tree*. Here we study the problem in a planar embedding: vertices of $G$ are points in the plane, edges of $G$ are straight segments between the points and weights are given as part of the problem. Instead of limiting the total edge weight of the solution, we require the edges of $T$ to be non-intersecting. From a theoretical point of view this turns out to be an essential difference: the problem now has a geometric structure that we can make use of. As an application we could imagine that we wanted to connect $n$ cities with an optimal railroad network using straight line connections and no intersections. We now give a more precise definition of the problem. Given a set of points $S = \{p_1, \dots, p_n\} \subset \mathbb{R}^2$, and weights $w:S^2 \rightarrow \mathbb{R}$, having $w(x,x)=0$ and $w(x,y)=w(y,x)$, for all $x,y \in S$, we want to find a *geometric*, *crossing-free* graph $T$ with vertex set $S$ and edge weights given by $w$, such that the expected distance between two points chosen uniformly at random from $S$ is as small as possible. By *distance* we mean the length of the shortest path in $T$ and we denote it by $d_T$. Since adding an edge cannot increase a distance, it suffices to consider *maximal* crossing-free graphs, i.e., *triangulations*. We call this the Minimum Average Distance Triangulation (madt) problem. The previous formulation, if we omit the normalizing factor, amounts to finding a triangulation $T$ that minimizes the following quantity: $$\mathcal{W}(T) = \displaystyle\sum_{1 \leq i < j \leq n}{d_T(p_i, p_j)}.$$ Similarly, we ask for the triangulation $T$ of the interior of a polygon with $n$ vertices that has the minimum value $\mathcal{W}(T)$. In this case the triangulation consists of all boundary edges and a subset of the diagonals of the polygon. We note that in mathematical chemistry, the quantity $\mathcal{W}(T)$ is a widely used characteristic of molecular structures, known as Wiener index . Efficient computation of the Wiener index for special graphs, as well as its combinatorial properties have been the subject of significant research .
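For concreteness, $\mathcal{W}(T)$ can be evaluated for any candidate crossing-free graph by summing all-pairs shortest-path distances; the sketch below (illustrative only, not the algorithms developed in the paper) treats $T$ as a weighted graph.

```python
import itertools
import networkx as nx

def average_distance_cost(points, edges, w):
    """W(T): sum of shortest-path distances over all pairs of points."""
    T = nx.Graph()
    T.add_nodes_from(range(len(points)))
    T.add_weighted_edges_from((i, j, w(points[i], points[j])) for i, j in edges)
    dist = dict(nx.all_pairs_dijkstra_path_length(T, weight="weight"))
    return sum(dist[i][j] for i, j in itertools.combinations(range(len(points)), 2))
```

Recent networkx releases also ship a `wiener_index` helper that computes the same quantity for a weighted graph.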
Graph Neural Networks (GNNs) are a class of deep learning methods for reasoning about graphs . Their significance lies in incorporating both the feature information and structural information of a graph, which allows deriving new insights from a plethora of data . Unfortunately, similar to other deep learning methods, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), GNNs have a notable drawback: the computations that lead to a prediction cannot be interpreted directly . In addition, it can be argued that the interpretation of GNNs is more difficult than that of alternative methods, as two types of information, feature and structural information, are combined during decision-making . This lack of transparency and accountability impedes trust , which in turn hinders the deployment of such models in safety-critical scenarios. Furthermore, this opacity hinders gaining an insight into potential shortcomings of the model and understanding what aspects of the data are significant for the task to be solved . Recent research has attempted to improve the understanding of GNNs by producing various explainability techniques . A prominent example is GNNExplainer , which finds the sub-graph structures relevant for a prediction by maximising the mutual information between the prediction of a GNN and the distribution of possible subgraphs. However, a significant drawback of this method is that the explanations are local, which means they are specific to a single prediction. The goal of our work is to improve GNN explainability through concept-based explanations, which are produced by putting the human in the loop. Concept-based explanations are explanations in the form of small higher-level units of information . The exploration of such explanations is motivated by concept-based explanations being easily accessible by humans . Moreover, concept-based explanations act as a global explanation of the class, which improves the overall insight into the model . We pay particular attention to involving the user in the discovery and extraction process, as ultimately the user must reason about the explanations extracted. To the best of our knowledge, this is the first work that explores human-in-the-loop concept representation learning in the context of GNNs. We make the following contributions, with the source code available on GitHub [^1]: - an unsupervised method for concept discovery and extraction in GNNs for post-hoc explanation, which takes the user into account; - the resulting tool Graph Concept Explainer (GCExplainer); - a metric evaluating the purity of concepts discovered; - an evaluation of the approach on five node classification and two graph classification datasets. Our method is based on the Automated Concept-based Explanation (ACE) algorithm and the work presented by . Figure provides an overview of the methodology proposed. Our results show that GCExplainer allows the discovery and extraction of high-quality concepts, outperforming GNNExplainer based on our qualitative evaluation. Our results suggest that GCExplainer can be used as a general framework that finds application across a wide variety of GNN-based learning tasks.
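In broad strokes, ACE-style concept discovery can be sketched as clustering the GNN's node activations and presenting each cluster to the user as a candidate concept; the snippet below is our simplified reading (hyperparameter names and the nearest-to-centre selection are assumptions), not the released GCExplainer code.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_concepts(node_activations, n_concepts=10, top_k=5):
    """Cluster last-layer GNN activations; each cluster is treated as a concept.

    Returns the fitted clustering and, per concept, the indices of the nodes
    closest to the cluster centre (these, with their k-hop neighbourhoods,
    would be shown to the user as the concept's examples).
    """
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(node_activations)
    concepts = []
    for c in range(n_concepts):
        d = np.linalg.norm(node_activations - km.cluster_centers_[c], axis=1)
        concepts.append(np.argsort(d)[:top_k])
    return km, concepts
```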
A collection of graphs $\mathcal{H}$ is said to have the *Erdős-Pósa* property if there exists a function $f : \mathbb{N} \to \mathbb{R}_+$ such that for every natural $k$ and every graph $G$ at least one of the following two assertions holds: - $G$ contains a collection of $k$ vertex-disjoint subgraphs $G_1$, …, $G_k$, each isomorphic to a graph in $\mathcal{H}$; - $G$ contains a set $X$ of $f(k)$ vertices such that no subgraph of $G-X$ is isomorphic to a graph in $\mathcal{H}$. A collection $G_1$, …, $G_k$ as above is called a *packing* and a set $X$ as above is called a *transversal*. These definitions are motivated by a celebrated result of Erdős and Pósa . Denoting by $C_t$ the cycle of length $t$, they proved that $\mathcal{H} = \{C_t \mid t \geqslant 3\}$ has the Erdős-Pósa property. They obtained a function $f$ in $\Theta(k \log k)$ and proved that this function $f$ is best possible, up to a constant. Our main result is as follows. **Theorem 1**. * There exists a function $f : \mathbb{N}^2 \to \mathbb{R}_+$ with $f(k,\ell) = O(\ell \cdot k \log k)$ such that for every $k, \ell \in \mathbb{N}$ and every graph $G$, at least one of the two following assertions holds:* - *$G$ contains $k$ vertex-disjoint cycles of length at least $\ell$;* - *$G$ contains a set of $f(k,\ell)$ vertices meeting all the cycles of length at least $\ell$.* This result implies in particular that the collection $\mathcal{H} := \{C_t \mid t \geqslant \ell\}$ has the Erdős-Pósa property for each fixed natural $\ell$. Birmelé, Bondy and Reed proved Theorem with $f(k,\ell) = \Theta(\ell \cdot k^2)$ and left as an open problem to find the correct order of magnitude of $f$. Theorem essentially settles this problem. Our function $f$ is best possible up to a constant for each fixed $\ell$. Moreover, it is also best possible up to a constant for each fixed $k$. However, we do not know whether it is best possible up to a constant when both $k$ and $\ell$ vary. Before giving the outline of this paper, we mention a few relevant references concerning the case where $\mathcal{H}$ consists of all the graphs containing a fixed graph $H$ as a minor. Robertson and Seymour have shown that $\mathcal{H}$ has the Erdős-Pósa property if and only if $H$ is planar . They left wide open the problem of determining the order of magnitude of the best possible function $f$, for each fixed planar graph $H$. Our main result answers this problem when $H$ is a cycle. A recent paper of Fiorini, Joret and Wood  answers the problem when $H$ is a forest. In this case, it turns out that $f$ can be taken to be linear in $k$. The outline of the rest of the paper is as follows. We begin with some preliminaries in Section . The proof of Theorem is given in Section .
All human behavior representations (HBRs) simulate some aspects of people, either individually or as groups. HBRs differ from simulations of other complex phenomena by their knowledge bases, effectively sophisticated computer programs written in a knowledge representation language. As a discipline in simulation technology, HBR validation is still relatively immature, with no theory, few tools and techniques, and considerable but poorly documented experience. Two sources of information establish a firm foundation for the advancement of HBR validation technology: a broad experiential base and a more mature related field, knowledge-based system (KBS) verification and validation. HBR validators have learned many lessons from existing and future systems that deal with requirements, the subject matter expert-software engineer process, association of the HBR with the synthetic natural environment, and documentation. These lessons supply a rich source of guidance for future HBR validation activities. KBS
The ICT revolution has changed working and learning processes throughout the world. At the same time, ICT has created a new type of divide in society: between those who have access to and the ability to use ICT and those who do not. This paper investigates the extent of ICT diffusion in India and evaluates the inter-state and rural-urban technology divide. It also examines the extent of ICT diffusion and the digital divide across selected Indian states. To assess the level of ICT diffusion across the Indian states, an ICT diffusion index has been constructed using three indicators: cellular subscribers, state teledensity, and the percentage of villages with village public telephones (VPTs).
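As an illustration of how such a composite index is typically built (a generic min-max normalization with equal weights; the paper's exact weighting scheme is not reproduced here):

```python
import numpy as np

def ict_diffusion_index(indicators):
    """indicators: (n_states, 3) array of cellular subscribers, teledensity
    and % villages with VPTs (illustrative values)."""
    lo, hi = indicators.min(axis=0), indicators.max(axis=0)
    normalized = (indicators - lo) / (hi - lo)      # min-max scale each indicator
    return normalized.mean(axis=1)                  # equal-weight composite index
```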
This paper describes a pilot study with a computer-assisted translation workbench aimed at testing the integration of online learning (OL) and active learning (AL) features. We investigate the effect of these features on translation productivity, using interactive translation prediction (ITP) as a baseline. User activity data were collected from five beta testers using key-logging and eye-tracking. User feedback was also collected at the end of the experiments in the form of retrospective think-aloud protocols. We found that OL performs better than ITP, especially in terms of translation speed. In addition, AL provides better translation quality than ITP for the same levels of user effort. We plan to incorporate these features in the final version of the workbench.
Boolean matrices (with $0,1$ entries) and sign matrices (with $\pm 1$ entries) naturally appear in many areas of research[^5]. We use them e.g. to represent set systems and graphs in combinatorics, hypothesis classes in learning theory, and boolean functions in communication complexity. This work further investigates the relation between two useful complexity measures on sign matrices. **Definition 1** (Sign rank). *For a real matrix $M$ with no zero entries, let $\text{sign}(M)$ denote the sign matrix such that $(\text{sign}(M))_{i,j}=\text{sign}(M_{i,j})$ for all $i,j$. The sign rank of a sign matrix $S$ is defined as $$\text{sign-rank}(S) = \min \{ \text{rank}(M) : \text{sign}(M)=S\},$$ where the rank is over the real numbers. It captures the minimum dimension of a real space in which the matrix can be embedded using half spaces through the origin [^6] (see for example ).* **Definition 2** (Vapnik-Chervonenkis dimension). *The VC dimension of a sign matrix $S$, denoted $VC(S)$, is defined as follows. A subset $C$ of the columns of $S$ is called shattered if each of the $2^{|C|}$ different patterns of ones and minus ones appears in some row in the restriction of $S$ to the columns in $C$. The VC dimension of $S$ is the maximum size of a shattered subset of columns. It captures the size of the minimum $\epsilon$-net for the underlying set system .* The VC dimension and the sign rank appear in various areas of computer science and mathematics. One important example is learning theory, where the VC dimension captures the sample complexity of learning in the PAC model , and the sign rank relates to the generalization guarantees of practical learning algorithms, such as support vector machines, large margin classifiers, and kernel classifiers . Loosely speaking, the VC dimension relates to learnability, while sign rank relates to learnability by linear classifiers. Another example is communication complexity, where the sign rank is equivalent to the unbounded error randomized communication complexity , and the VC dimension relates to one round distributional communication complexity under product distributions . The main focus of this work is how large the sign rank can be for a given VC dimension. In learning theory, this question concerns the universality of linear classifiers. In communication complexity, this concerns the difference between randomized communication complexity with unbounded error and communication complexity under product distributions with bounded error. Previous works have studied these differences from the communication complexity perspective  and the learning theory perspective . In this work we provide explicit matrices and stronger separations compared to those of  and . See the discussions in Section  and Section  for more details.
Recently, as deep learning has achieved great development, deep neural networks (DNNs) have shown remarkable performance in various research areas, such as computer vision , natural language processing , and speech processing . Nevertheless, there exists one significant problem to be solved in utilizing DNNs robustly in the real world, that is, the environment discrepancy problem. The environment discrepancy problem occurs when the training data distribution and testing data distribution are mismatched, and it results in serious performance degradation of DNNs . Therefore, although end-users want to exploit well-trained DNNs in their own testing environment, they may fail to experience the power of DNNs because of the aforementioned problem. For example, as described in Fig. (a), an end-user tries to adopt an object detection model which is trained with clean weather data, and of course, it detects objects successfully under the same clean weather condition. However, as shown in Fig. (b), when the user wants to detect objects under adverse weather conditions, the detection model would fail to conduct robust object detection, because the testing environment (*i*.*e*., adverse weather) differs from the training environment (*i*.*e*., clean weather) . Therefore, end-users are recommended to find and exploit DNN models well-trained on training data that is consistent with their own testing environment. One possible approach to alleviate such a problem is Domain Adaptation (DA), which aims to reduce the domain gap between the source and target domains by learning domain-invariant representations . However, such DA methods usually require knowledge of the internal architecture of the network and access to both source and target data simultaneously for learning the domain-invariant features. Therefore, it is time-consuming and difficult for end-users to understand the behavior of the network and use both kinds of data. In this paper, we focus on how to let end-users enjoy the beneficial performance of well-trained DNN models even with a different testing data distribution. To this end, we introduce a method by which end-users can adapt a pretrained model to their testing environment without any knowledge of the model architecture or any finetuning of the model, using only a small number of data samples from the testing environment. Motivated by the recent success of input-level transformations that convert the originally learned task to another task , instead of modifying the weight parameters of the pretrained models, we propose to use an additional input, called *meta input*, to match the distribution of testing data with that of training data. Specifically, we suppose that an end-user wants to adopt a pretrained model under a different testing environment with only a few labeled/unlabeled testing data, while the user cannot have access to the training data which was used to pretrain the model. Then, the proposed meta input can be optimized to transform the testing data distribution to be aligned with the training data distribution where the pretrained model operates properly. After that, the meta input can be embedded into the testing data to make the pretrained model perform well under the different testing environment. For example, as shown in Fig. (c), the meta input is embedded into the testing data, so that the pretrained detection model conducts robust object detection even under adverse weather conditions without modifying its weight parameters.
The proposed meta input can be optimized simply with any gradient-based training algorithm by using a few labeled, or unlabeled, data samples from the testing environment. With the meta input, the learned knowledge of pretrained DNN models can be extended to diverse testing environments without knowing the network architecture or modifying its weight parameters. Therefore, end-users can enjoy the full performance of off-the-shelf DNNs in their own testing environment. We verify both the effectiveness and the practicality of the proposed meta input in the real world through extensive experiments on three tasks: image classification, object detection, and visual speech recognition. Our contributions can be summarized as follows: - Since the proposed meta input can match the distribution of testing data with that of training data, the knowledge the pretrained DNN models have already learned can be utilized properly even under different environments. - Different from existing DA methods, the proposed method does not require any knowledge of the model architecture, modification of its weight parameters or access to the training data (which is used for pretraining the model), and it only needs a small number of testing data samples. - The effectiveness and versatility of the proposed meta input are corroborated by comprehensive experiments on three practical tasks: image classification, object detection, and visual speech recognition.
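A minimal sketch of the optimization just described (ours, for illustration: the additive form, optimizer choice and loss are assumptions; only the frozen-model/learned-input split is taken from the text):

```python
import torch
import torch.nn.functional as F

def learn_meta_input(model, test_loader, input_shape, steps=200, lr=1e-2):
    """Learn an additive input for a frozen pretrained classifier from a few
    labeled test samples; input_shape is e.g. (C, H, W)."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)                  # no weight modification
    meta = torch.zeros(1, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([meta], lr=lr)
    for _ in range(steps):
        for x, y in test_loader:                 # small labeled test set
            opt.zero_grad()
            logits = model(x + meta)             # embed the meta input into the data
            loss = F.cross_entropy(logits, y)
            loss.backward()
            opt.step()
    return meta.detach()

# At inference time: prediction = model(test_image + meta)
```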
This paper considers robust code division multiple access (CDMA) system design and capacity analysis under disguised jamming, where the jammer generates a fake signal using the same spreading code, constellation, and pulse shaping filter as those of the authorized signal. Unlike Gaussian jamming, which is destructive only when jamming is dominant, disguised jamming can be devastating even if the jamming power is comparable to the signal power. In this paper, first, we analyze the performance of conventional CDMA under disguised jamming, and show that due to the symmetry between the authorized signal and the jamming interference, the receiver cannot really distinguish the authorized signal from jamming, leading to complete communication failure. Second, we propose to combat disguised jamming using secure scrambling. Instead of using conventional scrambling codes, we apply the Advanced Encryption Standard (AES) to generate security-enhanced scrambling codes. Theoretical analysis based on the arbitrarily varying channel model shows that the capacity of conventional CDMA without secure scrambling under disguised jamming is actually zero; however, secure scrambling can break the symmetry between the authorized signal and the jamming interference, and hence ensures positive channel capacity under disguised jamming. Numerical examples are provided to demonstrate the effectiveness of secure scrambling in combating disguised jamming.
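One plausible way to realize such security-enhanced scrambling (a sketch under our own assumptions about key handling and the bit-to-chip mapping, not the construction specified in the paper) is to draw the scrambling chips from an AES keystream:

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def secure_scrambling_sequence(key, nonce, n_chips):
    """Derive a +/-1 scrambling sequence from an AES-CTR keystream."""
    keystream = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(
        bytes(n_chips // 8 + 1))                 # encrypt zeros -> raw keystream
    bits = np.unpackbits(np.frombuffer(keystream, dtype=np.uint8))[:n_chips]
    return 1.0 - 2.0 * bits                      # map 0/1 bits to +1/-1 chips

key, nonce = os.urandom(32), os.urandom(16)      # AES-256 key, 128-bit CTR nonce
chips = np.sign(np.random.randn(1024))           # toy spread signal
scrambled = chips * secure_scrambling_sequence(key, nonce, chips.size)
```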
Coordination algorithms for multiple autonomous vehicles and decentralized estimation techniques for handling data coming from distributed sensor networks have attracted large attention in recent years. This is mainly motivated by the fact that both coordinated control and distributed estimation have applications in many areas, such as coordinated flocking of mobile vehicles , cooperative control of unmanned air and underwater vehicles  , multi-vehicle tracking with limited sensor information , monitoring very large scale areas with fine resolution and collaborative estimation over wireless sensor networks . Typically, both in coordinated control and in distributed estimation the agents need to communicate data in order to execute a task. In particular they may need to agree on the value of certain coordination state variables. One expects that, in order to achieve coordination, the variables shared by the agents converge to a common value, asymptotically. The problem of designing controllers that lead to such asymptotic coordination is called *coordinated consensus*, see for example  and references therein. Generalisations to high-order consensus  and nonholonomic agents  have also been explored. One of the simplest and most studied consensus problems consists in starting from systems described by an integrator and finding a feedback control yielding consensus, namely driving all the states to the same value . The information exchange is modeled by a directed graph describing in which pairs of agents data transmission is allowed. The situation mostly treated in the literature is when each agent has the possibility of communicating its state to the other agents positioned inside a neighborhood, and the communication network is time-varying . Robustness to communication link failures and the effects of time delays have been considered recently. Randomly time-varying networks have also been analyzed in . Moreover, a first analysis involving quantized data transmission has been proposed in . In this paper we consider the consensus problem from a different perspective. We are interested in characterizing the relationship between the amount of information exchanged by the agents and the achievable control performance. More precisely we assume that $N$ agents are given initial positions in Euclidean space, and move in discrete time in order to reach the average of their initial positions. This problem is also called *average coordinated consensus*. Every agent asks several agents their positions before taking a decision to modify its own position. We impose that, in order to limit communication costs, every agent communicates with only $\nu$ agents (including itself), where $\nu < N$. This means that in the graph describing the communications between agents, the maximum in-degree is at most $\nu$. In this paper, we exhibit a family of strategies for solving this problem based on de Bruijn graphs and we prove that, according to suitable criteria, this is the best that one can do. Precisely, we compute its performance according to two criteria: the rate of convergence to the average of the initial positions and an LQR criterion. We find that a deadbeat strategy is optimal according to the rate of convergence, and nearly optimal according to the LQR criterion. Finally, we compare it with another strategy having limited communication and exhibiting symmetries: the Cayley strategies .
It should be noted, however, that our strategy is limited to the case where the number of agents is an exact power of $\nu$. Whether it is possible to build a linear time-invariant deadbeat strategy for any number of agents (for a given $\nu$) remains an open problem. The paper is organized as follows. In Section II we provide some basic notions of graph theory and some notational conventions. In Section III we formally define the average consensus problem. In Section IV we introduce the block Kronecker strategy. In Section V we show that the block Kronecker strategy is the quickest possible strategy and we compare it with the Cayley strategy. In Section VI we evaluate the performance of the block Kronecker strategy according to suitable quadratic criteria. Finally we gather our conclusions in Section VII.
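To make the deadbeat behaviour concrete, the following small numerical sketch (ours, not the paper's block Kronecker construction) uses the uniform-weight consensus matrix on the de Bruijn graph with $N=\nu^m$ agents; after exactly $m$ steps every agent holds the initial average, while using only $\nu$ values per step.

```python
import numpy as np

def de_bruijn_matrix(nu, m):
    """Uniform-weight consensus matrix on the de Bruijn graph B(nu, m):
    agent i averages the states of agents (nu*i + a) mod N, a = 0..nu-1."""
    N = nu ** m
    W = np.zeros((N, N))
    for i in range(N):
        for a in range(nu):
            W[i, (nu * i + a) % N] = 1.0 / nu
    return W

nu, m = 2, 3                              # N = 8 agents, nu = 2 values per agent
W = de_bruijn_matrix(nu, m)
x0 = np.random.randn(nu ** m)
x = x0.copy()
for _ in range(m):                        # deadbeat: exactly m = log_nu(N) steps
    x = W @ x
assert np.allclose(x, x0.mean())          # every agent holds the initial average
```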
We analyze the characteristics of Internet traffic by considering its correlation properties. First, a simple measurement is performed, using the autocorrelation technique, to illustrate the correlation properties of the HTTP/Web and e-mail (SMTP) services. ACF (autocorrelation function) and PACF (partial ACF) measurements are applied to examine the correlation characteristics of the traffic within consecutive hours. The measurement results indicate that correlation with a one-hour lag is acceptable for both services. Then the autocorrelation of each service is computed to understand its correlation properties. The experimental results illustrate that the HTTP/Web service has much higher self-correlation, especially at a one-hour lag, than the e-mail service. Finally, a heuristic bandwidth allocation method is proposed that takes the correlation characteristics into account. The proposed method is suitable for managing Internet bandwidth dynamically.
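The lag-correlation measurements described above can be reproduced on any hourly traffic series with standard tools; a brief sketch (the input file name is hypothetical):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

# Hourly traffic volumes (e.g., bytes per hour for HTTP or SMTP); with hourly
# bins, a lag of 1 corresponds to the one-hour lag discussed above.
hourly_traffic = np.loadtxt("http_hourly_bytes.txt")

acf_vals = acf(hourly_traffic, nlags=24)
pacf_vals = pacf(hourly_traffic, nlags=24)
print("one-hour-lag ACF: %.3f, PACF: %.3f" % (acf_vals[1], pacf_vals[1]))
```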
Spatial response resampling (SR2): Accounting for the spatial point spread function in hyperspectral image resampling. With the increased availability of hyperspectral imaging (HSI) data at various scales (0.03-30 m), the role of simulation is becoming increasingly important in data analysis and applications. There are few commercially available tools to spatially degrade imagery based on the spatial response of a coarser resolution sensor. Instead, HSI data are typically spatially degraded using nearest neighbor, pixel aggregate or cubic convolution approaches. Without accounting for the spatial response of the simulated sensor, these approaches yield unrealistically sharp images. This article describes the spatial response resampling (SR2) workflow, a novel approach to degrade georeferenced raster HSI data based on the spatial response of a coarser resolution sensor. The workflow is open source and widely available for personal, academic or commercial use with no restrictions. The importance of the SR2 workflow is shown with three practical applications (data cross-validation, flight planning and data fusion of separate VNIR and SWIR images).
- The SR2 workflow derives the point spread function of a specified HSI sensor based on nominal data acquisition parameters (e.g., integration time, altitude, speed), convolving it with a finer resolution HSI dataset for data simulation.
- To make the workflow approachable for end users, we provide a MATLAB function that implements the SR2 methodology.
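The authors provide a MATLAB implementation; the following is a deliberately simplified Python analogue (our sketch, assuming a Gaussian PSF and an integer resampling factor) of the blur-then-aggregate idea, not the SR2 function itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_band(band, fine_gsd, coarse_gsd, psf_fwhm):
    """Blur one band with a Gaussian PSF of the coarser sensor, then
    block-average to the coarser ground sample distance (GSD)."""
    sigma_px = (psf_fwhm / 2.355) / fine_gsd          # FWHM -> sigma, in fine pixels
    blurred = gaussian_filter(band, sigma=sigma_px)
    f = int(round(coarse_gsd / fine_gsd))             # integer factor assumed
    h, w = (blurred.shape[0] // f) * f, (blurred.shape[1] // f) * f
    return blurred[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
```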
k Nearest Neighbours (k-NN) search is a fundamental problem in many computer vision and machine learning tasks. These tasks frequently involve a large number of high-dimensional vectors, which require intensive computations. Recent research work has shown that the Graphics Processing Unit (GPU) is a promising platform for solving k-NN search. However, these search algorithms often meet a serious bottleneck on GPUs due to a selection procedure, called k-selection, which is the final stage of k-NN and significantly affects the overall performance. In this paper, we propose new data structures and optimization techniques to accelerate k-selection on GPUs. Three key techniques are proposed: Merge Queue, Buffered Search and Hierarchical Partition. Compared with previous works, the proposed techniques can significantly improve the computing efficiency of k-selection on GPUs. Experimental results show that our techniques can achieve an up to 4.2× performance improvement over the state-of-the-art methods.
Comparing measurements is fundamental to the sciences, and so it is not surprising that ordering, bounding, and optimizing real-valued expressions is central to mathematics. A host of computational methods have been developed to support such reasoning, using symbolic or numeric methods, or both. For example, there are well-developed methods of determining the satisfiability or unsatisfiability of linear inequalities , polynomial inequalities , nonlinear inequalities involving functions that can be approximated numerically , and inequalities involving convex functions . The "satisfiability modulo theories" framework provides one way of integrating such methods with ordinary logical reasoning and proof search; integration with resolution theorem proving methods has also been explored . Interactive theorem provers like Isabelle and HOL Light now incorporate various such methods, either constructing correctness proofs along the way, or reconstructing them from appropriate certificates. (For a small sample, see .) Such systems provide powerful tools to support interactive theorem proving. But, frustratingly, they often fail when it comes to fairly routine calculations, leaving users to carry out explicit calculations painstakingly by hand. Consider, for example, the following valid implication: $$0 < x < y, \; u < v \; \Rightarrow \;2 u + \mathtt{exp}(1 + x + x^4) < 2 v + \mathtt{exp}(1 + y + y^4)$$ The inference is not contained in linear arithmetic or even the theory of real-closed fields. The inference is tight, so symbolic or numeric approximations to the exponential function are of no use. Backchaining using monotonicity properties of addition, multiplication, and exponentiation might suggest reducing the goal to subgoals $2 u < 2 v$ and $\mathtt{exp}(1 + x + x^4) < \mathtt{exp}(1 + y + y^4)$, but this introduces some unsettling nondeterminism. After all, one could just as well reduce the goal to - $2 u < \mathtt{exp}(1 + y + y^4)$ and $\mathtt{exp}(1 + x + x^4) < 2 v$, or - $2 u + \mathtt{exp}(1 + x + x^4) < 2 v$ and $0 < \mathtt{exp}(1 + y + y^4)$, or even - $2 u < 2 v + 7$ and $\mathtt{exp}(1 + x + x^4) < \mathtt{exp}(1 + y + y^4) - 7$. And yet, the inference is entirely straightforward. With the hypothesis $u < v$ in mind, you probably noticed right away that the terms $2u$ and $2 v$ can be compared; similarly, the comparison between $x$ and $y$ leads to comparisons between $x^4$ and $y^4$, then $1 + x + x^4$ and $1 + y + y^4$, and so on. The method we propose is based on such heuristically guided forward reasoning, using properties of addition, multiplication, and the function symbols involved. As is common for resolution theorem proving, we try to establish the theorem above by negating the conclusion and deriving a contradiction. We then proceed as follows: - Put all terms involved into a canonical normal form. This enables us to recognize terms that are the same up to a scalar multiple, and up to associativity and commutativity of addition and multiplication. - Iteratively call specialized modules to learn new comparisons between subterms, and add these new comparisons to a common "blackboard" structure, which can be accessed by all modules. The theorem is verified when any given module derives a contradiction using this common information. The procedure fails when none of the modules can learn anything new. We will see in Section  that the method is far from complete, and may not even terminate. 
On the other hand, it is flexible and extensible, and easily verifies a number of inferences that are not obtained using more principled methods. As a result, it provides a useful complement to more conventional approaches. We have designed and implemented modules to learn comparisons from the additive and multiplicative structure of terms, a module to instantiate axioms involving arbitrary functions symbols, and special-purpose modules for common functions like min, max, absolute value, exp, and log. The additive and multiplicative modules have two different implementations, with different characteristic strengths and weaknesses. The first uses a natural but naive Fourier-Motzkin elimination, and the second uses more refined geometric techniques. Our prototype implementation, written in Python, is available online: > <https://github.com/avigad/polya> We have named the system "Polya," after George Pólya, in recognition of his work on inequalities as well as his thoughtful studies of heuristic methods in mathematics (e.g.  ). The general idea of deriving inequalities by putting terms in a normal form and combining specialized modules is found in Avigad and Friedman , which examines what happens when the additive and multiplicative fragments of real arithmetic are combined. This is analogous to the situation handled by SMT solvers, with the added twist that the languages in question share inequality symbols and multiplication by constant coefficients in addition to the equality symbol. Avigad and Friedman show that the universal fragment remains decidable even if both theories include multiplication by rational constants, while the full first-order theory is undecidable. The former decidability result, however, is entirely impractical, for reasons discussed there. Rather, it is the general framework for combining decision procedures and the use of canonical normal forms that we make use of here. The outline of the paper is as follows. In Section , we describe the general blackboard architecture which is the shared interface for the different modules, and the canonical form for terms. In Section , we describe the implementation of the additive and multiplicative modules based on the Fourier-Motzkin algorithm, whereas in Section  we describe the implementation based on existing tools from discrete geometry. In Section , we describe a module that instantiates general axioms, and in Section  we describe more specialized modules that contribute information to the blackboard. In Section , we provide a number of examples that help characterize the method's strengths and weaknesses. Finally, in Section , we discuss some of the many ways that the method can be extended, as well as ways in which the implementation may be improved. This paper is a revised and expanded version of the conference paper . The extensions described in this paper, chiefly the additional modules described in Section , are due to Avigad and Lewis. More detailed descriptions of some of the representations and algorithms can be found in Lewis' MS thesis .
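To give a flavour of the blackboard-style forward reasoning described above, the motivating example $0<x<y,\ u<v \Rightarrow 2u+\exp(1+x+x^4) < 2v+\exp(1+y+y^4)$ can be settled by repeatedly adding comparisons between subterms. The toy loop below (ours; hand-written rules standing in for Polya's canonical forms and general modules) mirrors the spirit rather than the implementation:

```python
# Facts are strict comparisons (a, b) meaning a < b between canonical terms.
facts = {("0", "x"), ("x", "y"), ("u", "v")}          # hypotheses 0<x<y, u<v
goal = ("2*u+exp(1+x+x^4)", "2*v+exp(1+y+y^4)")       # negated conclusion's target

def monotone_rules(facts):
    new = set()
    if ("0", "x") in facts and ("x", "y") in facts:
        new.add(("x^4", "y^4"))                       # powers preserve order on positives
    if ("x", "y") in facts and ("x^4", "y^4") in facts:
        new.add(("1+x+x^4", "1+y+y^4"))               # addition of comparable terms
    if ("1+x+x^4", "1+y+y^4") in facts:
        new.add(("exp(1+x+x^4)", "exp(1+y+y^4)"))     # exp is strictly increasing
    if ("u", "v") in facts:
        new.add(("2*u", "2*v"))                       # positive scalar multiple
    if ("2*u", "2*v") in facts and ("exp(1+x+x^4)", "exp(1+y+y^4)") in facts:
        new.add(goal)                                 # add comparisons term-wise
    return new

while goal not in facts:
    derived = monotone_rules(facts) - facts
    if not derived:
        break                                         # nothing new learned: give up
    facts |= derived

print("proved" if goal in facts else "failed")        # prints "proved"
```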
The paper introduces the concept of teaching with computer multimedia, discusses its basic requirements for preparing and delivering lessons and its connection with traditional teaching, and points out some disadvantageous factors to be overcome, with a view to increasing the efficiency of computer multimedia teaching.
In this article, we only consider simple, undirected and connected graphs. Suppose $G=(V_{G},E_{G})$ is a graph with vertex set $V_{G}=\{v_1,v_2,\cdots,v_n\}$ and edge set $E_{G}=\{e_1,e_2,\cdots,e_m\}$. Let $D(G)=diag\{d_{1},d_{2},\cdots,d_{n}\}$ be the degree diagonal matrix, where $d_{i}$ is the degree of $v_{i}$ in $G$, and let $A(G)$ be the adjacency matrix of $G$, a $(0,1)$-matrix of order $n$. The Laplacian matrix is then defined as $L(G)=D(G)-A(G)$. Let $0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{n}$ be the eigenvalues of $L(G)$. From the characteristic polynomial of $L(G)$, we can obtain the Laplacian spectrum of $L_{n}$ . For more notations and terminologies, one may refer to . At this point, some parameters are introduced. The distance $d_{ij}$ is the length of a shortest path between vertices $i$ and $j$; summing it over all pairs gives the well-known distance-based topological descriptor named the Wiener index , that is $$\begin{aligned} W(G)=\sum_{i<j}d_{ij}. \end{aligned}$$ In electrical network theory, the resistance distance was first proposed by Klein and Randić . This concept has a physical interpretation: the resistance distance between nodes $i$ and $j$ of the graph $G$ is denoted by $r_{ij}$. One well-known resistance-distance-based parameter, called the Kirchhoff index, is given by $$Kf(G)=\sum_{i<j}r_{ij}.$$ The Kirchhoff index has attracted extensive attention due to its wide applications in physics, chemistry and other fields. Despite all that, it is hard to deal with the Kirchhoff index of complex graphs. Thus, some researchers try to find new techniques to compute the Kirchhoff index and obtain closed formulas. Given an $n$-vertex graph $G$, Klein and Lovász proved independently that $$\begin{aligned} Kf(G)=\sum_{\{u,v\}\subseteq V} r_{G}(u,v)=n\sum_{k=2}^{n}\frac{1}{\mu_{k}} , \end{aligned}$$ where $0=\mu_{1}<\mu_{2}\leq\cdots\leq\mu_{n}$ ($n\geq2$) are the eigenvalues of $L(G)$. The number of spanning trees of the graph $G$, also known as the complexity of $G$, is the number of spanning subgraphs of $G$ that are trees . According to the decomposition theorem of the Laplacian polynomial, Y. Yang et al. (2008) obtained the Laplacian spectrum of linear hexagonal networks. J. Huang et al. obtained the normalized Laplacian spectrum of linear hexagonal networks by using the decomposition theorem. Then the Laplacian spectra of linear phenylenes were derived . Besides, Z. Zhu and J. Liu obtained the Laplacian spectrum of generalized phenylenes. Thus, the extended considerations for calculating the Laplacian spectrum of linear octagonal-quadrilateral networks are presented in the following sections. In the following, we introduce some theorems and notations in Section . Then, we derive the Laplacian spectrum of $L_n$ by using the relationship between the coefficients and roots in Section . An example of the result is given in Section . The conclusion is summarized in Section .
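The eigenvalue formula above is easy to check numerically on a small example; for the path $P_3$ the resistance distances are $1$, $1$ and $2$, so $Kf=4$:

```python
import networkx as nx
import numpy as np

# Check Kf(G) = n * sum_{k>=2} 1/mu_k on the path graph P_3.
G = nx.path_graph(3)
L = nx.laplacian_matrix(G).toarray().astype(float)
mu = np.sort(np.linalg.eigvalsh(L))              # 0 = mu_1 < mu_2 <= ... <= mu_n
kf = G.number_of_nodes() * np.sum(1.0 / mu[1:])
print(kf)                                        # 4.0, matching r12 + r23 + r13
```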
The purpose of this paper is to extend the notion of self-similarity to one-parameter families of stochastic processes. Recall that a real-valued stochastic process $\left\{ X(t),t\geq 0\right\}$ is called *self-similar with* *Hurst exponent* $H\in (0,1)$ if it satisfies the scaling property $$X(ct)\overset{d}{=}c^{H}X(t)\text{ for }t\geq 0\text{,} \label{stoch}$$ for all $c>0$, where $\overset{d}{=}$ denotes equality of the finite-dimensional distributions . The corresponding process with drift $\mu \in \mathbb{R}$, defined by $$X(\mu ;t)=X(t)+\mu t\text{ for }t\geq 0\text{,} \label{drift}$$ does not, however, satisfy () for $\mu \neq 0,$ an example being Brownian motion with drift. Instead, the family of processes () satisfies the following scaling property, $$X(\mu c^{H-1};ct)\overset{d}{=}c^{H}X(\mu ;t)\text{ for }t\geq 0\text{,} \label{sdrift}$$ for all $c>0$, and we shall hence propose () as a new definition of self-similarity for families of stochastic processes, cf. Definition  below. We show that there are in fact many important families of stochastic processes that satisfy () without having the drift form (), and for values of $H$ that do not necessarily belong to $(0,1)$. One such example is the class of Hougaard Lévy processes , which includes for example the family of Poisson processes ($H=0$), certain gamma compound Poisson processes ($H<0$), and the family of inverse Gaussian processes ($H=2$), cf. . A further example is a new class of fractional Hougaard motions defined as moving averages of Hougaard Lévy processes, generalizing fractional Brownian motion. As we shall see, such processes have many properties in common with ordinary self-similar processes, as reflected for example in the familiar form of their covariance functions. This represents an important step forward compared with conventional self-similar processes, where the only processes with finite variance are the fractional Brownian motions. In Section , we present the new notion of self-similarity for families of stochastic processes. We show that their covariance structure is completely determined by the so-called variance function, and we study the role of power variance functions. In Section  we explore the connection between self-similarity and exponential tilting, and show the self-similarity of Hougaard Lévy processes. In Section  we show the self-similarity of the class of fractional Hougaard motions using their moving average representation. In Section , we introduce a Lamperti-type transformation, which transforms a self-similar family into a family of stationary processes. In Section  we consider some Lamperti-type limit theorems, where families of self-similar processes appear as limits of suitably scaled families of stochastic processes. Finally, in Section , we investigate the case $H=1$ and its relation with exponential variance functions.
An improved method of steering a charged-particle beam compensates for the time of flight (TOF) of the charged particles passing through the system when one or more of the deflector signals is changed. According to one embodiment of the present invention, a digital filter is applied to the scan pattern prior to digital-to-analog (D/A) conversion in order to reduce or eliminate overshoot effects that may be caused by TOF errors. In other embodiments, an analog filter or an amplifier with a lower signal bandwidth may also be used to compensate for TOF errors. By modifying the scan pattern in this way, overshoot effects can be significantly reduced or eliminated.
In the present work, forecasting the future sales of a company involved in uranium enrichment is treated as a problem of allocating the company's output to global regional markets. Based on this concept, the optimization problem of the company's effective portfolio is formulated and solved. The company's relative total share of the world market is defined by the balance between the probability distributions of global demand for enriched uranium and of enriched uranium production at competing enterprises. Effective portfolios of company shares in the world regional markets take into account risk and uncertainty and are calculated from the probability distribution of the company's relative world share, regional market quotas and restrictions, market risk measures and expected market efficiencies. Combining fuzzy and probabilistic approaches to portfolio selection allows accurate use of market information and an exhaustive analysis of company potential and market opportunities.
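As a rough illustration of the kind of constrained allocation involved (a generic mean-variance formulation with quota caps; the paper's fuzzy-probabilistic machinery is not reproduced here, and all numbers are placeholders):

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.05, 0.06, 0.04])        # expected market efficiencies
cov = np.diag([0.04, 0.01, 0.02, 0.01])        # market risk (covariance)
cap = np.array([0.5, 0.4, 0.4, 0.3])           # regional quota restrictions
lam = 3.0                                      # risk aversion

objective = lambda w: lam * w @ cov @ w - mu @ w
res = minimize(objective, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0.0, c) for c in cap],
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
weights = res.x                                # shares allocated to each regional market
```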
The present paper is meant to give an algorithm for computing matrix corepresentations of the quantum groups $SL_q(N)$ and $SU_q(N)$. This is done by first computing the corepresentations for $N=2$ as in [KS]; then a simple combinatorial re-indexing of basis elements leads one to a similar method for computing them for $N>2$. The computations will be given explicitly when possible; however, the closed form of the corepresentation is somewhat difficult to write down. The paper following this one will use these corepresentations to compute the Haar functional on $SU_q(N)$ for all $N$.
Abstract. The notion of the Cauchy index of a real rational scalar function is generalized to define the Cauchy index of a real rational symmetric matrix in terms of the behavior of the matrix at real singularities of its elements. Alternative characterizations are obtained which flow from representations of the real rational matrix using a Laurent series, a matrix fraction description, and a state variable realization. These characterizations involve a Hankel and a Bezoutian matrix. A matrix Sturm theorem is obtained and its use for evaluating the index is indicated. Descriptions of certain impedance matrices arising in passive circuit theory are given using the matrix Cauchy index.
The integration of the Global Positioning System (GPS) and Inertial Navigation Systems (INSs) is often used to provide accurate positioning and navigation information. For applications requiring the highest accuracy, the quality of the inertial sensors required is usually assumed to be very high. This dissertation investigates the integration of GPS with a tactical-grade Inertial Measurement Unit (IMU) for centimetre-level navigation in real-time. Different GPS/INS integration strategies are investigated to assess their relative performance in terms of position and velocity accuracy during partial and complete data outages, carrier phase ambiguity resolution after such data outages, and the overall statistical reliability of the system. In terms of statistical reliability, the traditional equations used in dynamic systems are redeveloped in light of some practical considerations, including centralized and decentralized filter architectures, and sequential versus simultaneous measurement updating. Results show that the integrated solution outperforms the GPS-only approach in all areas. The difference between loose and tight integration strategies was most significant for ambiguity resolution and system reliability. The integrated solution is capable of providing decimetre-level accuracy or better for durations of about five or ten seconds when a complete or partial GPS outage is simulated. This level of accuracy, extended over longer time intervals, is shown to reduce the time required to resolve the L1 ambiguities by an average of about 50% or more for data outages as long as 30 seconds when using a tight integration strategy. More importantly, the reliability of the ambiguity resolution process is improved with the integrated
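As background for the sequential-versus-simultaneous updating comparison mentioned above, the two forms of the Kalman measurement update yield the same posterior when the measurement noise covariance is diagonal; a generic numerical sketch (not the dissertation's GPS/INS filter):

```python
import numpy as np

def batch_update(x, P, z, H, R):
    """Simultaneous (batch) Kalman measurement update."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

def sequential_update(x, P, z, H, R):
    """Process one scalar measurement at a time (valid for diagonal R)."""
    for i in range(len(z)):
        h, r = H[i:i + 1, :], R[i, i]
        k = P @ h.T / (h @ P @ h.T + r)           # scalar innovation variance
        x = x + (k * (z[i] - h @ x)).ravel()
        P = (np.eye(len(x)) - k @ h) @ P
    return x, P

rng = np.random.default_rng(0)
x, P = np.zeros(4), np.eye(4)
H, R, z = rng.standard_normal((3, 4)), np.diag([0.1, 0.2, 0.3]), rng.standard_normal(3)
xb, Pb = batch_update(x, P, z, H, R)
xs, Ps = sequential_update(x, P, z, H, R)
assert np.allclose(xb, xs) and np.allclose(Pb, Ps)   # identical posteriors
```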
A control signal value controls a phase-shifted clock such that the control signal value selects among at least several discrete delay times applied to the clock. The control signal value is applied to control the selection of the discrete time delay. A phase-locked loop or analog-to-digital converter produces the control signal value in response to information representing variable phase bits and to the selected phase-shifted clock. A time delay unit or the like is used to obtain a series of at least several replica signals, from which one is selected. In one embodiment, a multiplexer connected to an output terminal selects among the delay-unit outputs in response to the control signal value. In another embodiment, the control signal value sets the number of delay units interposed in series before the output clock.
The present invention discloses a method and system for security scanning of browser bookmarks. In the method, a browser back-end server receives a synchronization request from the browser client, the synchronization request including the user account and the bookmark URLs; the browser back-end server stores the bookmark URLs corresponding to the user account; upon receiving from the browser client a cloud-scanning command that includes the user account, the server scans the bookmark URLs corresponding to that account for risks, determines which URLs are risky, and returns the scanning results, including the risky URLs, to the browser client. Embodiments of the present invention can improve the safety of browser bookmarks and save storage space on the terminal device where the browser client resides.
Convolutional Neural Networks (CNNs) have been proven to be successful in supervised video representation learning with numerous human-annotated labels . Videos contain more complex spatio-temporal content and a larger data volume. Billions of unlabeled videos emerge on the Internet every day, making supervised video analysis expensive and time-consuming. Thus, how to effectively learn video representations without annotations is an important yet challenging task. Among effective unsupervised learning methods, self-supervised learning has proven to be a promising methodology . Early video self-supervised learning approaches proposed suitable pretext tasks with automatically generated labels, thereby encouraging CNNs to learn transferable features for downstream tasks without human-annotated labels . Recently, with the success of contrastive learning in self-supervised image classification, this method has been widely extended to video self-supervised learning . However, there are obvious limitations in these works. Firstly, previous works only explored discriminating similar features from dissimilar ones while ignoring the intermediate state of learned representations, such as the similarity degree, which limits the overall performance. Secondly, less effort has been put into the mutual influence of multiple pretext tasks and various contrastive learning schemes for spatio-temporal representation learning. To address these problems, we propose a novel pretext task, i.e., Spatio-Temporal Overlap Rate (STOR) prediction, to perceive the similarity degree as the intermediate state to favor contrastive learning, and we propose a joint optimization framework of contrastive learning and multiple pretext tasks to further enhance spatio-temporal representation learning. It is observed that, given a set of two clips at a specific overlap rate, humans can discriminate the overlap rate when provided with candidates (see Figure ). We assume that humans can do this thanks to their favorable spatio-temporal representation ability. Building upon this observation, we believe that CNNs can learn video representations better by discriminating such overlap rates. The assumption is that CNNs can only succeed in such a spatio-temporal overlap rate reasoning task when they learn representative spatio-temporal features. To the best of our knowledge, this is the first work that attempts to capture the spatio-temporal degree of similarity between generated samples for self-supervised learning. Moreover, we propose a new and effective data augmentation method for the pretext task. This data augmentation method can generate samples with randomly overlapped spatio-temporal regions, while keeping the randomness of the samples. In order to study the mutual influence of contrastive learning and pretext tasks for better spatio-temporal representation learning, we comprehensively study the joint optimization scheme. Specifically, we study four popular contrastive learning frameworks, i.e., SimCLR , MoCo , BYOL and SimSiam , in our joint optimization scheme. In terms of pretext tasks, playback rate prediction has been proven to be successful in video self-supervised learning , but it tends to focus on motion patterns and thus may not learn spatial patterns well . To overcome this problem, we add a rotation prediction task to further strengthen spatial appearance features.
Finally, a joint optimization framework of STOR, contrastive learning, playback rate prediction and rotation prediction is proposed in this work, namely contrastive spatio-temporal pretext learning (CSTP). Extensive experimental evaluations on two downstream video understanding tasks demonstrate the effectiveness of the proposed approach. Specifically, several architectures including C3D , R(2+1)D and S3D are employed, and different weights of the joint learning are explored in this work. The experimental results verify that the proposed STOR can cooperate well with different contrastive learning frameworks and other pretext tasks. The proposed joint learning framework CSTP outperforms state-of-the-art approaches in the two downstream video understanding tasks. The main contributions of this work can be summarized as follows: - Taking the degree of similarity of training samples into account, we propose a novel pretext task, i.e., spatio-temporal overlap rate prediction, for video self-supervised learning. The pretext task can enhance spatio-temporal representation learning via discriminating the overlap regions of the training samples. - We propose a joint optimization framework which combines contrastive learning with spatio-temporal pretext tasks. And we conduct comprehensive experiments to study the mutual influence of each component of the framework. - Our method achieves state-of-the-art performance on two downstream tasks, action recognition and video retrieval, across two datasets, UCF-101 and HMDB-51. Ablation studies demonstrate the efficacy of the proposed STOR and the mutual influence of contrastive learning and pretext tasks.
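One plausible way to generate a training pair together with its overlap-rate label, shown here for the temporal dimension only (our illustrative reading of the pretext task; the paper's exact sampling scheme and label definition may differ):

```python
import numpy as np

def temporal_overlap_pair(video, clip_len=16, seed=None):
    """Sample two clips from one video and compute their temporal overlap rate.

    video: array of shape (T, H, W, C). The spatial dimensions could be handled
    analogously; any discretization of the label for a classification head is
    an assumption.
    """
    rng = np.random.default_rng(seed)
    T = video.shape[0]
    s1, s2 = rng.integers(0, T - clip_len + 1, size=2)   # start frames of the clips
    clip1, clip2 = video[s1:s1 + clip_len], video[s2:s2 + clip_len]
    inter = max(0, min(s1, s2) + clip_len - max(s1, s2)) # overlapping frames
    overlap_rate = inter / clip_len                      # 0.0 (disjoint) .. 1.0 (identical)
    return clip1, clip2, overlap_rate
```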
Laser triangulation 3D scanning is performed on a setup that closely matches the intended target size, desired spatial resolution, intended environment, and target surface types. A modular test setup is proposed to enable laser triangulation 3D scanning with exchangeable sets of components, supporting a wider range of possible scanning options. A software framework is proposed to control the setup programmatically in a unified manner, independently of the underlying components used. Automated test runs are carried out, visualized, saved, and repeated under desired conditions using a command interpreter and batches of commands. The proposed software framework divides image processing into sequential modular blocks convenient for pipeline execution. A light-pattern extraction block enables camera focus and laser source modulation feedback to be incorporated into the image processing algorithms under test.
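As a hedged illustration of the block-based pipeline idea (not the actual framework), the sketch below chains exchangeable processing blocks, with a toy light-pattern extraction stage; the block names, the context-dictionary interface, and the placeholder triangulation geometry are assumptions.

```python
from typing import Callable, Dict, List

# Each block maps a shared context dict to an updated context dict, so blocks
# can be swapped or reordered without changing the pipeline driver.
Block = Callable[[Dict], Dict]

def run_pipeline(blocks: List[Block], context: Dict) -> Dict:
    """Execute modular processing blocks sequentially (pipeline style)."""
    for block in blocks:
        context = block(context)
    return context

def capture_frame(ctx: Dict) -> Dict:
    # Placeholder: a real block would grab a frame from the camera driver.
    ctx["frame"] = [[0.0, 0.9, 0.1], [0.0, 0.8, 0.2]]
    return ctx

def extract_light_pattern(ctx: Dict) -> Dict:
    """Pick the brightest column per row as the laser line (toy peak
    detection); a real block would also feed focus/modulation feedback."""
    frame = ctx["frame"]
    ctx["laser_peaks"] = [max(range(len(row)), key=row.__getitem__) for row in frame]
    return ctx

def triangulate(ctx: Dict) -> Dict:
    # Placeholder geometry: depth proportional to peak offset (assumed scale).
    ctx["profile"] = [0.5 * p for p in ctx["laser_peaks"]]
    return ctx

result = run_pipeline([capture_frame, extract_light_pattern, triangulate], {})
print(result["profile"])  # [0.5, 0.5]
```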
As an important teaching medium, the modern distance education network has the advantage of being unconstrained by time. However, how to effectively convert network information into personal knowledge and experience has always been a problem for network education. Discussing the network learning strategies of adult learners can therefore help them improve their learning outcomes, promote the design and construction of network courses, and improve the quality of network education.
Advances in deep learning and sensory technology (*e.g.*, RGB cameras, LiDAR, radar, stereo, RGB-D, among others ) have made remarkable contributions to perception systems applied to autonomous driving . Perception systems include, but are not limited to, image and point cloud-based classification and detection , semantic segmentation , and tracking . Oftentimes, regardless of the type of network architecture or input modalities, most state-of-the-art CNN-based object recognition algorithms output normalized prediction scores via the SoftMax layer , *i.e.*, the prediction values are in the range $[0, 1]$, as shown in Fig. . Furthermore, such algorithms are often implemented through deterministic neural networks, and the prediction itself does not consider the model's actual confidence for the predicted class in decision-making . In fact, in most cases, the decision-making takes into account only the prediction value provided directly by a deep learning algorithm, disregarding a proper level of confidence of the prediction (which is unavailable for most networks). Therefore, evaluating the prediction confidence or uncertainty is crucial in decision-making because an erroneous decision can lead to disaster, especially in autonomous driving, where the safety of human lives is dependent on the automation algorithms. Many works have pointed out SoftMax overconfidence as an open issue in the field of deep learning . Two main techniques have been suggested to mitigate overconfidence in deep networks: calibration  and regularization . Often, calibration is defined as a set of techniques that act directly on the resulting output of the network, while regularization refers to techniques that aim to penalize the network weights through a variety of methods, adding parameters or terms directly to the network cost/loss function . However, the paper proposed by  defines regularization techniques as a type of calibration. Consequently, the latter demands that the network be retrained. The overconfidence problem is more evident in complex networks such as Multi Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs), particularly when using the SoftMax function as the prediction layer, thus generating ill-distributed outputs, *i.e.*, values very close to zero or one , which can be observed in Fig. and Fig. . We note that this is desirable when the true positives have higher scores. However, the counterpart problem is that 'overconfident networks' also generate high-score values for objects that are erroneously detected or classified, *i.e.*, false positives. Given this problem, a question arises: *how can we guarantee prediction values that are 'high' for true positives and, at the same time, 'low' for false positives?* This question can be answered by analyzing the output of the network's Logit layer (linear activation function), which provides a smoother output than the SoftMax layer, as can be observed from Figs. and . Following this, we can put the question as follows: *although normalized outputs aim to guarantee a "probabilistic interpretation", how reliable are these predictions?
Additionally, given an object belonging to a non-trained/unseen class (*e.g.*, an unexpected object on the road), how confident is the model prediction?* These are the key research questions explored in this work, considering the importance of having models grounded on proper probability assumptions to enable adequate interpretation of the outputs, ultimately leading to more reliable predictions and decisions. In terms of contributions, this paper introduces the Maximum Likelihood (ML) and Maximum a-Posteriori (MAP) functions for deep neural networks, which provide a more adequate solution compared to state-of-the-art (SoftMax or Sigmoid) prediction layers. Both ML and MAP functions compute a single estimate, rather than a distribution. Moreover, this work contributes towards the advances of multi-sensor perception (RGB and LiDAR modalities) for autonomous perception systems  by proposing a probability-grounded solution that is practical in the sense that it can be used in existing state-of-the-art models such as Yolo . It is important to emphasize that there is no need to retrain the neural networks when the approach described in this article is employed, because the ML and MAP functions produce outputs based on PDFs obtained from the Logit layer of already trained networks. Therefore, instead of using the traditional prediction layers (SoftMax or Sigmoid) to predict object scores on a test set, the ML and MAP functions can be used to make the predictions for the object scores. Thus, the proposed technique is practical given that a network has already been trained with a SoftMax or Sigmoid prediction layer. In other words, the ML and MAP functions depend on the Logit outputs of the already trained network[^6]. In summary, the scientific contributions arising from this work are as follows: - An investigation of the distribution of predicted values of the Logit and SoftMax layers, for both calibrated and non-calibrated networks; - An analysis of the predicted probabilities inferred by the proposed ML and MAP formulations, both for object classification and detection; - An investigation of the predicted score values on out-of-training-distribution test data (unseen/non-trained classes); - The proposed approach does not require retraining of the networks; - Experimental validation of the proposed methodology on different modalities, RGB and Range-View (3D point clouds-LiDAR), for classification (using InceptionV3) and object detection (using YoloV4). In this paper, we report object recognition results showing that the SoftMax and Sigmoid prediction layers do indeed sometimes induce erroneous decision-making, which can be critical in autonomous driving. This is particularly evident when "unseen" samples, *i.e.*, out-of-training-distribution test data, are presented to the network. On the other hand, the approach described here is able to mitigate such problems during the testing stage (prediction). The rest of this article is structured as follows. The related work is presented in Section , the proposed methodology is developed in Section , the experimental part and the results are reported in Section , and finally the conclusion is given in Section .
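To illustrate the general idea of replacing SoftMax scores with likelihood-based ones (the exact density estimation in the paper may differ), the following sketch fits one Gaussian per class to the Logit outputs of an already trained network and then scores test logits with ML and MAP functions; the Gaussian model, the per-class fitting scheme, and the synthetic data are assumptions.

```python
import numpy as np
from scipy.stats import norm

def fit_logit_pdfs(train_logits, train_labels, n_classes):
    """Return per-class (mean, std) of the true-class logit; no retraining of
    the network is required, only a pass over its Logit outputs."""
    params = []
    for c in range(n_classes):
        z = train_logits[train_labels == c, c]
        params.append((z.mean(), z.std() + 1e-6))
    return params

def ml_score(test_logits, params):
    """Maximum-likelihood score: per-class likelihood of the observed logit."""
    return np.stack([norm.pdf(test_logits[:, c], mu, sd)
                     for c, (mu, sd) in enumerate(params)], axis=1)

def map_score(test_logits, params, priors):
    """MAP score: likelihood times class prior, normalized over classes."""
    post = ml_score(test_logits, params) * priors
    return post / post.sum(axis=1, keepdims=True)

# Tiny synthetic example with two classes and well-separated logits.
rng = np.random.default_rng(0)
train_logits = np.vstack([rng.normal([[4.0, -1.0]], 1.0, size=(200, 2)),
                          rng.normal([[-1.0, 5.0]], 1.0, size=(200, 2))])
train_labels = np.concatenate([np.zeros(200, dtype=int), np.ones(200, dtype=int)])

params = fit_logit_pdfs(train_logits, train_labels, n_classes=2)
print(map_score(np.array([[3.5, 0.0]]), params, priors=np.array([0.5, 0.5])))
# prediction strongly favours class 0
```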
The subject of this paper is motivated by recent work on Fourier uniqueness and non-uniqueness pairs. Broadly speaking, we are interested in the following general question. Given a space $V$ of continuous integrable functions on $\mathbb{R}^n$ and two subsets $A, B \subset \mathbb{R}^n$, when is it possible to recover any function $f \in V$ from the restrictions $f|_A$ and $\widehat{f}|_B$ (where $\widehat{f}$ denotes the Fourier transform of $f$)? In other words, we are interested in conditions on $A,B,V$, under which the restriction map $f \mapsto (f|_A, \widehat{f}|_B)$ is injective. When the map is injective, we say that $(A, B)$ is a (Fourier) uniqueness pair and if $A = B$, we simply say that $A$ is a (Fourier) uniqueness set. Conversely, if the map is not injective, we call $(A,B)$ a non-uniqueness pair, and when $A=B$ we call $A$ a non-uniqueness set. Naturally, one would like the function space $V$ to be as large as possible and the sets $A$ and $B$ to be as small as possible, or "minimal" in a certain sense. A prototypical example of a minimal Fourier uniqueness set was found by Radchenko and Viazovska in , where they proved that, when $V = \mathcal{S}_{\text{even}}(\mathbb{R})$ is the space of even Schwartz functions on the real line, the set $A = \sqrt{\mathbb{Z}_{+}} := \{ \sqrt{n}\,:\, n \in \mathbb{Z}_{\geq 0}\}$ is a uniqueness set and established an interpolation theorem in this setting. The result is sharp in the sense that no proper subset of $A$ remains a uniqueness set for $\mathcal{S}_{\text{even}}(\mathbb{R})$. Their proof was based on the theory of classical modular forms, which is also well-suited to treat the case $V = \mathcal{S}_{\text{rad}}(\mathbb{R}^n)$ of *radial* Schwartz functions on $\mathbb{R}^n$ and the set $A =U_n := \cup_{m \in \mathbb{N}_0}{\sqrt{m}S^{n-1}}$. For the latter generalization, we refer to §2 in , which deduces the result from . The second author recently proved an interpolation formula generalizing the one by Radchenko–Viazovska also to non-radial functions, that is, to the space $V = \mathcal{S}(\mathbb{R}^n)$ and the same set of concentric spheres $U_n$. However, for $n>1$, it is no longer minimal. Indeed, the (related) interpolation formula in  implies that the space of $f \in \mathcal{S}(\mathbb{R}^n)$ satisfying $f(x)=\widehat{f}(x)=0$ for all $x\in \cup_{m \ge N}{\sqrt{m}S^{n-1}}$ is finite-dimensional for all $N$ and is in fact contained in $\mathcal{H}_{4N+2}\otimes W$ for some finite-dimensional space $W\subset \mathcal{S}_{\text{rad}}(\mathbb{R}^n)$, where $\mathcal{H}_k$ denotes the space of harmonic polynomials on $\mathbb{R}^n$ of degree $\le k$. Since a generic subset of $\dim \mathcal{H}_k$ points in $rS^{n-1}$ is an interpolation set for the space $\mathcal{H}_k$ (in the sense that any polynomial $p\in \mathcal{H}_k$ is uniquely determined by its values on $\dim \mathcal{H}_k$ generic points), this implies that there is a uniqueness set properly contained in $U_n$ that contains only finitely many points on spheres with radius $\le \sqrt{N}$. In fact, it was recently proved by the second author and Ramos in that *any* discrete and sufficiently uniformly distributed subset $D \subset U_n$ remains a uniqueness set for $\mathcal{S}(\mathbb{R}^n)$. Here, "sufficiently" means that $D \cap \sqrt{m}S^{n-1}$ contains at least $C_n m^{c_n m}$ many points. 
We contrast these Fourier uniqueness results by providing two families of discrete *non*-uniqueness sets in $\mathbb{R}^n$, where one of them is again contained in $U_n$, while the other lies in a union of ellipsoids. Both of them are constructed from lattices corresponding to ideals in totally real number fields $K/\mathbb{Q}$ of degree $n$, and their density grows with the discriminant of $K$ (although their distribution is not uniform, in the sense that they "avoid" points near the coordinate axes; see Figure ). We give the precise formulations in the next subsection. Thus, characterizing the *discrete* Fourier uniqueness sets contained in $U_n$ seems to be a subtle question. In fact, the motivation for this work was not to find negative results in this direction, but to try to generalize the modular form theoretic approach of Radchenko and Viazovska to treat not necessarily radial functions on $\mathbb{R}^n$, in a way that is very different from the approach taken by Stoller (who essentially reduces the problem again to the case of radial Schwartz functions). More specifically, we were interested in (possible) interpolation formulas where we replace the set of nodes $A=\sqrt{\mathbb{Z}_{+}}$ by "square roots" of certain lattices coming from totally real number fields $K$, specifically, the co-different $\mathcal{O}_{K}^{\vee}$ of their ring of integers $\mathcal{O}_{K}$. In this setup, it seemed natural to ask whether one could work with Hilbert modular forms and associated integral transforms, similarly to the proof by Radchenko–Viazovska. As we will explain in more detail in § and briefly in §, there is an obstruction to the existence of such interpolation formulas. From the more general point of view taken in §, the obstruction arises because, for $n \geq 2$, subgroups of $\mathop{\mathrm{PSL}}_2(\mathbb{R})^n$ that are commensurable to the Hilbert modular group $\mathop{\mathrm{PSL}}_2(\mathcal{O}_K)$ are irreducible lattices and can therefore never contain subgroups of finite index with infinite abelianization, by Margulis' normal subgroup theorem. On the other hand, the presence of certain unfavorable relations in the Hilbert modular group can be exploited in an explicit manner to obtain the non-uniqueness sets indicated in the abstract.
In contrast to the traditional client/server infrastructure, distributed VoD systems achieve better performance by taking advantage of cooperation among nodes. Cache management is one of the most important issues in such distributed systems because of limited storage capacity and network bandwidth. In this paper, a delamination-based (layered) caching policy is analyzed based on a flexible popularity estimate. A formal optimal formula, which can reduce disk I/O under the random-access-memory caching policy, is proposed, and the corresponding suboptimal solution is given. Finally, a data replacement algorithm based on quadratic programming is designed for client nodes that are unstable in the network. The simulation results illustrate that client behavior is tracked better with the popularity estimate, disk I/O is lowered, and the effectiveness of the system cache is boosted by applying our proposals.
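As a minimal sketch of the popularity-driven caching idea only (the paper's layered policy and quadratic-programming replacement are not reproduced here), the following Python class evicts the segment with the lowest exponentially decayed access count; the decay factor and interface are assumptions.

```python
class PopularityCache:
    """Toy popularity-driven cache for video segments.

    Popularity is an exponentially weighted access count, a simple stand-in
    for the flexible popularity estimate; the full scheme additionally layers
    RAM/disk caches and solves a quadratic program for replacement.
    """

    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity
        self.decay = decay
        self.store = {}        # segment_id -> data
        self.popularity = {}   # segment_id -> decayed access count

    def access(self, segment_id, fetch):
        # Decay all popularity estimates so stale segments fade out.
        for k in self.popularity:
            self.popularity[k] *= self.decay
        self.popularity[segment_id] = self.popularity.get(segment_id, 0.0) + 1.0

        if segment_id in self.store:
            return self.store[segment_id]           # cache hit
        if len(self.store) >= self.capacity:        # evict least popular
            victim = min(self.store, key=lambda k: self.popularity[k])
            del self.store[victim]
        self.store[segment_id] = fetch(segment_id)  # cache miss: fetch
        return self.store[segment_id]

cache = PopularityCache(capacity=2)
for seg in ["a", "a", "b", "c", "a"]:
    cache.access(seg, fetch=lambda s: f"data-{s}")
print(sorted(cache.store))  # ['a', 'c']
```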
should both be corrected as positive definite polynomials $s_{9,i,j}(x)$ and $s_{10,i,j}(x) \in \Sigma$ or nonzero polynomials $s_{9,i,j}(x)$ and $s_{10,i,j}(x) \in \Sigma$ with degree 0 (i.e., positive constant polynomials $s_{9,i,j}(x)$ and $s_{10,i,j}(x) \in \Sigma$). Based on the aforementioned corrections, we can partially reobtain the results for the case $d = 2$ in Example 1 by using nonzero polynomials $s_{9,i,j}(x)$ and $s_{10,i,j}(x) \in \Sigma$ with degree 0 as follows: Algorithm 2 returns an inner estimate $D_M(2)$ after iterating 13 steps within 220.357 seconds, whose boundary is depicted in red in Figure 1; Algorithm 3 returns an inner estimate $D_{BM}(2)$ after iterating 2 steps within 326.797 seconds, whose boundary is depicted in blue in Figure 1. Note that, in the original paper,$^1$ except for the case $d = 2$ in Example 1, $s_{9,i,j}(x)$ and $s_{10,i,j}(x)$ were both preassumed to be nonzero polynomials in $\Sigma$ with degree 0 for all other examples when applying SOSTOOLS for implementation. Additionally, since $SS_{i,j} \subset S_{i,j}$, the transformations for constraints (6) and (7) in both Sections 4.1 and 5.2 are all underapproximative instead of equivalent. Finally, the authors deeply thank Mr Torbjørn Cunis for comments on the mistake caused by $s_{9,i,j}(x)$ and $s_{10,i,j}(x) \in \Sigma$. The authors also apologize for the mistakes.
What should a researcher do when statistical analysis software terminates before completion with a message that the Hessian is not invertible? The standard textbook advice is to respecify the model, but this is another way of saying that the researcher should change the question being asked. Obviously, however, computer programs should not be in the business of deciding what questions are worthy of study. Although noninvertible Hessians are sometimes signals of poorly posed questions, nonsensical models, or inappropriate estimators, they also frequently occur when information about the quantities of interest exists in the data through the likelihood function. The authors explain the problem in some detail and lay out two preliminary proposals for ways of dealing with noninvertible Hessians without changing the question asked.
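One generic remedy, shown below for illustration and not necessarily among the authors' two proposals, is to fall back to a Moore–Penrose pseudoinverse (optionally with a small ridge term) when inverting the Hessian to obtain a variance-covariance matrix; the ridge parameter and example matrix are assumptions.

```python
import numpy as np

def covariance_from_hessian(hessian, ridge=0.0):
    """Variance-covariance matrix of ML estimates from a (possibly singular)
    negative log-likelihood Hessian.

    If the Hessian cannot be inverted, fall back to the Moore-Penrose
    pseudoinverse, optionally after a small ridge adjustment.  This is a
    generic workaround, not necessarily the authors' proposals.
    """
    h = hessian + ridge * np.eye(hessian.shape[0])
    try:
        return np.linalg.inv(h)
    except np.linalg.LinAlgError:
        return np.linalg.pinv(h)

# Rank-deficient Hessian, e.g. from two perfectly collinear parameters.
H = np.array([[2.0, 2.0],
              [2.0, 2.0]])
cov = covariance_from_hessian(H)
print(np.round(cov, 3))        # pseudoinverse-based "covariance"
print(np.sqrt(np.diag(cov)))   # resulting standard errors need careful reading
```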
We embed an untyped security protocol model in the interactive theorem prover Isabelle/HOL and derive a theory for constructing proofs of secrecy and authentication properties. Our theory is based on two key ingredients. The first is an inference rule for enumerating the possible origins of messages known to the intruder. The second is a class of protocol-specific invariants that formalize type assertions about variables in protocol specifications. The resulting theory is well suited for interactively constructing human-readable, protocol security proofs. We additionally give an algorithm that automatically generates Isabelle/HOL proof scripts based on this theory. We provide case studies showing that both interactive and automatic proof construction are efficient. The resulting proofs provide strong correctness guarantees since all proofs, including those deriving our theory from the security protocol model, are machine-checked.
The control algorithm and simulation model of a single-neuron self-adaptive PID controller are set forth. The self-learning and self-adaptive ability of the single-neuron PID controller allows its parameters to be adjusted automatically. Simulation results show that this control method is more adaptive and robust than the conventional PID approach.
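A minimal sketch of a commonly described single-neuron adaptive PID scheme (supervised Hebbian adaptation of the three weights of an incremental PID) is given below; the gains, learning rates, and toy first-order plant are placeholders rather than values from the paper.

```python
class SingleNeuronPID:
    """Incremental PID whose three weights adapt online via a supervised
    Hebbian rule; the overall gain K and learning rates are illustrative."""

    def __init__(self, K=0.3, etas=(0.2, 0.2, 0.2)):
        self.K = K
        self.etas = etas            # learning rates for the P, I, D weights
        self.w = [0.3, 0.3, 0.3]    # initial weights
        self.e_prev = 0.0
        self.e_prev2 = 0.0
        self.u = 0.0

    def step(self, error):
        # Neuron inputs: first difference, error itself, second difference.
        x = [error - self.e_prev,
             error,
             error - 2.0 * self.e_prev + self.e_prev2]
        # Supervised Hebbian adaptation: dw_i is proportional to e * u * x_i.
        for i in range(3):
            self.w[i] += self.etas[i] * error * self.u * x[i]
        s = sum(abs(wi) for wi in self.w)
        wn = [wi / s for wi in self.w]          # normalized weights
        self.u += self.K * sum(wi * xi for wi, xi in zip(wn, x))
        self.e_prev2, self.e_prev = self.e_prev, error
        return self.u

# Toy first-order plant y_{k+1} = 0.9*y_k + 0.1*u_k tracking a unit step.
pid, y = SingleNeuronPID(), 0.0
for _ in range(200):
    u = pid.step(1.0 - y)
    y = 0.9 * y + 0.1 * u
print(round(y, 3))  # settles near the 1.0 setpoint
```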
The well stirred model of a chemical system as a continuous time Markov process with state space ${\mathbb Z}_+^N$ has been known for several decades . Exact simulation of sample paths of such processes is very simple and is commonly known as the SSA (abbreviation for Stochastic Simulation Algorithm) or the Gillespie algorithm . Stochastic chemical models have become important in applications to intracellular mechanisms, and these models often possess some species in small molecular copy numbers as well as a range of time scales, in addition to nonlinear propensity functions. Hence approximations of the whole system by ordinary differential equations (ODEs) or even stochastic differential equations (SDEs) driven by Brownian motion are often not valid. On the other hand, the SSA is often prohibitively expensive. Tau leaping methods were proposed as efficient but approximate alternatives to SSA simulations. While the exact simulation (SSA) accounts for reaction events one at a time, the tau leap methods take a predetermined time step and then provide an approximation of the random state at the end of the time step using some criterion. Thus tau leap simulation of sample paths is akin to time stepping methods for ODEs and SDEs driven by Brownian motion. The first tau leap method was proposed by Gillespie and is now known as the explicit tau leap method. This is in spirit the same as the explicit Euler method for ODEs. The implicit tau leap method was introduced in and the trapezoidal tau leap method may be found in . Several other tau leap methods have been proposed in the literature since then; see for instance and references therein.
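For concreteness, the sketch below contrasts an exact SSA path with an explicit tau-leap approximation for the single decay reaction $S \to \emptyset$ with propensity $c x$; the rate constant, step size, and initial copy number are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa_decay(x0, c, t_end):
    """Exact SSA (Gillespie) for the reaction S -> 0 with propensity c*x."""
    t, x = 0.0, x0
    while x > 0:
        a = c * x                      # total propensity
        t += rng.exponential(1.0 / a)  # exponential waiting time to next reaction
        if t > t_end:
            break
        x -= 1                         # fire one reaction
    return x

def tau_leap_decay(x0, c, t_end, tau):
    """Explicit tau-leap: fire a Poisson number of reactions per fixed step."""
    t, x = 0.0, x0
    while t < t_end:
        k = rng.poisson(c * x * tau)   # approximate number of firings in [t, t+tau)
        x = max(x - k, 0)
        t += tau
    return x

x0, c, t_end = 1000, 1.0, 1.0
print("SSA:     ", ssa_decay(x0, c, t_end))
print("tau-leap:", tau_leap_decay(x0, c, t_end, tau=0.05))
print("ODE mean:", round(x0 * np.exp(-c * t_end)))  # ~368
```

For this linear example both simulators fluctuate around the ODE mean; the tau-leap path costs a fixed number of steps regardless of how many reactions fire, which is the source of its efficiency.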
This paper describes research on Web data mining using Natural Language Processing. The system accepts an arbitrary Web document as input and then extracts information from it. A new method to implement Web data mining is proposed in this paper. The system works in three steps. First, the Web document is decomposed to the paragraph, sentence, and phrase level. Second, information is extracted from all sentences. Finally, the information is added to the knowledge model. The methods used have proved efficient for Web data mining on the experimental corpus.
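A toy sketch of the three described steps (decomposition, extraction, and adding facts to a knowledge model) is given below, using naive sentence splitting and a single "X is a Y" pattern; the regular expressions and the dictionary-based knowledge model are placeholders, not the paper's method.

```python
import re
from collections import defaultdict

def decompose(document):
    """Step 1: split a Web document into paragraphs and sentences (naive)."""
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    return [re.split(r"(?<=[.!?])\s+", p) for p in paragraphs]

def extract(sentence):
    """Step 2: toy information extraction -- 'X is a Y' patterns only."""
    m = re.match(r"(?P<subj>[A-Z][\w ]*?) is an? (?P<obj>[\w ]+)[.!?]?$", sentence)
    return (m.group("subj"), "is_a", m.group("obj")) if m else None

def build_knowledge_model(document):
    """Step 3: add extracted facts to a subject-indexed knowledge model."""
    model = defaultdict(list)
    for paragraph in decompose(document):
        for sentence in paragraph:
            fact = extract(sentence)
            if fact:
                subj, rel, obj = fact
                model[subj].append((rel, obj))
    return dict(model)

doc = "Paris is a city. It rains often.\n\nPython is a programming language."
print(build_knowledge_model(doc))
# {'Paris': [('is_a', 'city')], 'Python': [('is_a', 'programming language')]}
```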
Lattice Boltzmann schemes without coordinates. We discuss recent developments extending the scope of the lattice Boltzmann method to unstructured (coordinateless) grids with arbitrary connectivity. Besides their intrinsic interest as examples of discrete kinetic systems living in irregular phase-space, the above extensions bear a direct relevance as computational tools for multi-scale applications.
A panoptic camera makes it possible to acquire a full 360-degree view of a scene by stitching images from a set of cameras [1, 3, 2, 5]. The images from the cameras are acquired at different positions and orientations to form a full-view image. Stitching is performed by mixing and transforming the different images according to non-linear mapping equations. There are two kinds of stitching algorithms:
In this paper, the complete multiple reciprocity method is adopted to solve the one-dimensional (1D) Helmholtz equation for the semi-infinite domain. In order to recover the information that is missing when the conventional multiple reciprocity method is used, an appropriate complex number is added to the zeroth-order fundamental solution such that the kernels derived using the proposed method are fully equivalent to those derived using the complex-valued formulation. Two examples, including the Dirichlet and Neumann boundary conditions, are investigated to show the validity of the proposed method analytically and numerically. The numerical results show good agreement with the analytical solutions.

Dataset Card for Filter-Cyber

Synthetic forget set for LLM Unlearning Without an Expert Curated Dataset. Please see details in our Github repo.

Citation

If you find this useful in your research, please consider citing our paper:

@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset}, 
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595}, 
}