- Iterated integral and the loop product In this article we discuss a relation between string topology and differential forms, based on Chen's theory of iterated integrals and the cyclic bar complex. 1 authors · Apr 1, 2007
- Jets of foliations and $b^k$-algebroids In this article, we introduce and study singular foliations of b^k-type. These singular foliations formalize the properties of vector fields that are tangent to order k along a submanifold W ⊂ M. Our first result is a classification of these foliations, relating them to geometric structures defined in a formal neighborhood of the submanifold, such as jets of distributions that are involutive up to order k-1. When W is a hypersurface, singular foliations of b^k-type are Lie algebroids. In this particular case, they are generalizations of the b^k-tangent bundles introduced by Scott. Indeed, they are always locally isomorphic to b^k-tangent bundles, but globally such an isomorphism is obstructed by a holonomy invariant. Our second main result is a Riemann-Hilbert-style classification of singular foliations of b^k-type in terms of holonomy representations. In this paper, we study singular foliations of b^k-type from several different perspectives. In particular: (1) We study the problem of extending a k-th-order foliation to a (k+1)-th-order foliation and prove that this is obstructed by a characteristic class. (2) When W is a hypersurface, we give a detailed study of algebroid differential forms and extend Scott's calculation of the cohomology. (3) We study algebroid symplectic forms in terms of the geometric structures induced on W. In particular, we find that there is a close relationship between the above obstruction class for extensions and the symplectic variation of the symplectic foliation induced on W. 3 authors · Nov 28, 2023
1 Positive Geometries and Canonical Forms Recent years have seen a surprising connection between the physics of scattering amplitudes and a class of mathematical objects--the positive Grassmannian, positive loop Grassmannians, tree and loop Amplituhedra--which have been loosely referred to as "positive geometries". The connection between the geometry and physics is provided by a unique differential form canonically determined by the property of having logarithmic singularities (only) on all the boundaries of the space, with residues on each boundary given by the canonical form on that boundary. In this paper we initiate an exploration of "positive geometries" and "canonical forms" as objects of study in their own right in a more general mathematical setting. We give a precise definition of positive geometries and canonical forms, introduce general methods for finding forms for more complicated positive geometries from simpler ones, and present numerous examples of positive geometries in projective spaces, Grassmannians, and toric, cluster and flag varieties. We also illustrate a number of strategies for computing canonical forms which yield interesting representations for the forms associated with wide classes of positive geometries, ranging from the simplest Amplituhedra to new expressions for the volume of arbitrary convex polytopes. 3 authors · Mar 13, 2017
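For concreteness, the simplest example of a positive geometry is a closed interval; its canonical form (a standard example in this subject, written out here in our own notation) already exhibits the defining property of logarithmic singularities with unit residues on the boundary:

```latex
% Canonical form of the segment [a,b] \subset \mathbb{P}^1: simple poles only
% at the two boundary points, with residue +1 at x=a and -1 at x=b, matching
% the (trivial, \pm 1) canonical forms of the boundary points themselves.
\Omega\bigl([a,b]\bigr) = \frac{\mathrm{d}x}{x-a} - \frac{\mathrm{d}x}{x-b}
                        = \frac{(b-a)\,\mathrm{d}x}{(x-a)(b-x)}.
```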
- Explicit gate construction of block-encoding for Hamiltonians needed for simulating partial differential equations Quantum computation is an emerging technology with important potential for solving certain problems pivotal in various scientific and engineering disciplines. This paper introduces an efficient quantum protocol for the explicit construction of the block-encoding for an important class of Hamiltonians. Using the Schrödingerisation technique -- which converts non-conservative PDEs into conservative ones -- this particular class of Hamiltonians is shown to be sufficient for simulating any linear partial differential equation whose coefficients are polynomial functions. The class of Hamiltonians consists of discretisations of polynomial products and sums of position and momentum operators. This construction is explicit and leverages minimal one- and two-qubit operations. The explicit construction of this block-encoding forms a fundamental building block for constructing the unitary evolution operator for this Hamiltonian. The proposed algorithm exhibits polynomial scaling with respect to the spatial partitioning size, suggesting an exponential speedup over classical finite-difference methods. This work provides an important foundation for building explicit and efficient quantum circuits for solving partial differential equations. 3 authors · May 21, 2024
- Learning Feynman integrals from differential equations with neural networks We present a new approach for evaluating Feynman integrals numerically. We apply the recently proposed framework of physics-informed deep learning to train neural networks to approximate the solution to the differential equations satisfied by the Feynman integrals. This approach relies neither on a canonical form of the differential equations, which is often a bottleneck for the analytical techniques, nor on the availability of a large dataset, and after training yields essentially instantaneous evaluation times. We provide a proof-of-concept implementation within the PyTorch framework, and apply it to a number of one- and two-loop examples, achieving a mean magnitude of relative difference of around 1% at two loops in the physical phase space with network training times on the order of an hour on a laptop GPU. 3 authors · Dec 4, 2023
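To make the physics-informed idea above concrete, here is a minimal sketch of such a training loop for a toy first-order system dI/dx = A(x) I; the network, the matrix A, and the boundary value are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch (toy system, not the paper's code): train I_theta(x) to
# satisfy a Feynman-integral-style ODE system dI/dx = A(x) I plus a known
# boundary value, by minimizing the squared residual at collocation points.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),                        # two "master integrals" (toy)
)

def A(x):                                          # (N,1) -> (N,2,2), toy matrix
    z = torch.zeros_like(x)
    return torch.stack([torch.cat([-1.0 / x, z], dim=1),
                        torch.cat([1.0 / x, -2.0 / (x + 1.0)], dim=1)], dim=1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x = (torch.rand(256, 1) * 0.9 + 0.1).requires_grad_(True)  # collocation pts
    I = net(x)
    dI = torch.stack([torch.autograd.grad(I[:, i].sum(), x,
                                          create_graph=True)[0][:, 0]
                      for i in range(2)], dim=1)
    residual = (dI - torch.einsum('nij,nj->ni', A(x), I)).pow(2).mean()
    boundary = (net(torch.tensor([[0.5]])) -
                torch.tensor([[1.0, 0.0]])).pow(2).mean()      # assumed value
    opt.zero_grad(); (residual + boundary).backward(); opt.step()
```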
- FAME: Adaptive Functional Attention with Expert Routing for Function-on-Function Regression Functional data play a pivotal role across science and engineering, yet their infinite-dimensional nature makes representation learning challenging. Conventional statistical models depend on pre-chosen basis expansions or kernels, limiting the flexibility of data-driven discovery, while many deep-learning pipelines treat functions as fixed-grid vectors, ignoring their inherent continuity. In this paper, we introduce Functional Attention with a Mixture-of-Experts (FAME), an end-to-end, fully data-driven framework for function-on-function regression. FAME forms continuous attention by coupling a bidirectional neural controlled differential equation with MoE-driven vector fields to capture intra-functional continuity, and further fuses these changes with inter-functional dependencies via multi-head cross attention. Extensive experiments on synthetic and real-world functional-regression benchmarks show that FAME achieves state-of-the-art accuracy and strong robustness to arbitrarily sampled discrete observations of functions. 3 authors · Oct 1
45 TPDiff: Temporal Pyramid Video Diffusion Model The development of video diffusion models unveils a significant challenge: the substantial computational demands. To mitigate this challenge, we note that the reverse process of diffusion exhibits an inherent entropy-reducing nature. Given the inter-frame redundancy in video modality, maintaining full frame rates in high-entropy stages is unnecessary. Based on this insight, we propose TPDiff, a unified framework to enhance training and inference efficiency. By dividing diffusion into several stages, our framework progressively increases frame rate along the diffusion process with only the last stage operating on full frame rate, thereby optimizing computational efficiency. To train the multi-stage diffusion model, we introduce a dedicated training framework: stage-wise diffusion. By solving the partitioned probability flow ordinary differential equations (ODE) of diffusion under aligned data and noise, our training strategy is applicable to various diffusion forms and further enhances training efficiency. Comprehensive experimental evaluations validate the generality of our method, demonstrating a 50% reduction in training cost and a 1.5x improvement in inference efficiency. 2 authors · Mar 12 3
- Many Ways to Be Lonely: Fine-Grained Characterization of Loneliness and Its Potential Changes in COVID-19 Loneliness has been associated with negative outcomes for physical and mental health. Understanding how people express and cope with various forms of loneliness is critical for early screening and targeted interventions to reduce loneliness, particularly among vulnerable groups such as young adults. To examine how different forms of loneliness and coping strategies manifest in loneliness self-disclosure, we built a dataset, FIG-Loneliness (FIne-Grained Loneliness) by using Reddit posts in two young adult-focused forums and two loneliness-related forums consisting of a diverse age group. We provided annotations by trained human annotators for binary and fine-grained loneliness classifications of the posts. Trained on FIG-Loneliness, two BERT-based models were used to understand loneliness forms and authors' coping strategies in these forums. Our binary loneliness classification achieved an accuracy above 97%, and fine-grained loneliness category classification reached an average accuracy of 77% across all labeled categories. With FIG-Loneliness and model predictions, we found that loneliness expressions in the young adult-related forums were distinct from other forums. Those in young adult-focused forums were more likely to express concerns pertaining to peer relationships, and were potentially more sensitive to geographical isolation impacted by the COVID-19 pandemic lockdown. Also, we showed that different forms of loneliness have differential use in coping strategies. 4 authors · Jan 19, 2022
- Differentially Private Sequential Learning In a differentially private sequential learning setting, agents introduce endogenous noise into their actions to maintain privacy. Applying this to a standard sequential learning model leads to different outcomes for continuous vs. binary signals. For continuous signals with a nonzero privacy budget, we introduce a novel smoothed randomized response mechanism that adapts noise based on distance to a threshold, unlike traditional randomized response, which applies uniform noise. This enables agents' actions to better reflect both private signals and observed history, accelerating asymptotic learning speed to Theta_{epsilon}(log(n)), compared to Theta(log(n)) in the non-private regime where privacy budget is infinite. Moreover, in the non-private setting, the expected stopping time for the first correct decision and the number of incorrect actions diverge, meaning early agents may make mistakes for an unreasonably long period. In contrast, under a finite privacy budget epsilon in (0,1), both remain finite, highlighting a stark contrast between private and non-private learning. Learning with continuous signals in the private regime is more efficient, as smooth randomized response enhances the log-likelihood ratio over time, improving information aggregation. Conversely, for binary signals, differential privacy noise hinders learning, as agents tend to use a constant randomized response strategy before an information cascade forms, reducing action informativeness and hampering the overall process. 2 authors · Feb 26
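The mechanism can be sketched as follows; the particular flip-probability formula below is our illustrative assumption (it merely decays with distance from the threshold and reduces to the classical randomized-response rate at the threshold), not the paper's exact construction:

```python
# Illustrative only: the flip probability decays with distance from the
# decision threshold, reducing to the classical randomized-response rate
# 1/(1+e^eps) exactly at the threshold (assumed functional form).
import math, random

def smoothed_randomized_response(signal: float, threshold: float,
                                 epsilon: float) -> int:
    intended = 1 if signal >= threshold else 0       # agent's intended action
    distance = abs(signal - threshold)
    p_flip = 1.0 / (1.0 + math.exp(epsilon * (1.0 + distance)))
    return 1 - intended if random.random() < p_flip else intended
```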
- SplitMeanFlow: Interval Splitting Consistency in Few-Step Generative Modeling Generative models like Flow Matching have achieved state-of-the-art performance but are often hindered by a computationally expensive iterative sampling process. To address this, recent work has focused on few-step or one-step generation by learning the average velocity field, which directly maps noise to data. MeanFlow, a leading method in this area, learns this field by enforcing a differential identity that connects the average and instantaneous velocities. In this work, we argue that this differential formulation is a limiting special case of a more fundamental principle. We return to the first principles of average velocity and leverage the additivity property of definite integrals. This leads us to derive a novel, purely algebraic identity we term Interval Splitting Consistency. This identity establishes a self-referential relationship for the average velocity field across different time intervals without resorting to any differential operators. Based on this principle, we introduce SplitMeanFlow, a new training framework that enforces this algebraic consistency directly as a learning objective. We formally prove that the differential identity at the core of MeanFlow is recovered by taking the limit of our algebraic consistency as the interval split becomes infinitesimal. This establishes SplitMeanFlow as a direct and more general foundation for learning average velocity fields. From a practical standpoint, our algebraic approach is significantly more efficient, as it eliminates the need for JVP computations, resulting in simpler implementation, more stable training, and broader hardware compatibility. One-step and two-step SplitMeanFlow models have been successfully deployed in large-scale speech synthesis products (such as Doubao), achieving speedups of 20x. 11 authors · Jul 22
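The identity in question follows from additivity of definite integrals: writing u(z_t, r, t) for the average velocity over [r, t], one gets (t - r) u(z_t, r, t) = (s - r) u(z_s, r, s) + (t - s) u(z_t, s, t) for any split point s. A loss sketch is given below; how z_s is reconstructed and where gradients are stopped are our assumptions, not the paper's recipe:

```python
# Sketch of the interval-splitting consistency loss; u(z, r, t) is a network
# predicting the average velocity over [r, t] (details assumed, see above).
import torch

def split_consistency_loss(u, z_t, r, s, t):
    with torch.no_grad():                        # build a bootstrapped target
        u_st = u(z_t, s, t)                      # average velocity on [s, t]
        z_s = z_t - (t - s) * u_st               # step back to the split point
        u_rs = u(z_s, r, s)                      # average velocity on [r, s]
        target = ((s - r) * u_rs + (t - s) * u_st) / (t - r)
    return ((u(z_t, r, t) - target) ** 2).mean()
```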
- A short note on the decision tree based neural turing machine Turing machines and decision trees have developed independently for a long time; with the recent development of differentiable models, there is an intersection between them. The Neural Turing Machine (NTM) opened the door to memory networks: it uses a differentiable attention mechanism to read from and write to an external memory bank. Differentiable forests bring differentiable properties to classical decision trees. In this short note, we show the deep connection between these two models, namely that a differentiable forest is a special case of an NTM: a decision-tree-based neural Turing machine. Based on this deep connection, we propose a response augmented differential forest (RaDF). The controller of RaDF is a differentiable forest, and the external memory of RaDF consists of response vectors that are read and written by the leaf nodes. 1 authors · Oct 26, 2020
- Kolmogorov Arnold Informed neural network: A physics-informed deep learning framework for solving PDEs based on Kolmogorov Arnold Networks AI for partial differential equations (PDEs) has garnered significant attention, particularly with the emergence of Physics-informed neural networks (PINNs). The recent advent of Kolmogorov-Arnold Network (KAN) indicates that there is potential to revisit and enhance the previously MLP-based PINNs. Compared to MLPs, KANs offer interpretability and require fewer parameters. PDEs can be described in various forms, such as strong form, energy form, and inverse form. While mathematically equivalent, these forms are not computationally equivalent, making the exploration of different PDE formulations significant in computational physics. Thus, we propose different PDE forms based on KAN instead of MLP, termed Kolmogorov-Arnold-Informed Neural Network (KINN). We systematically compare MLP and KAN in various numerical examples of PDEs, including multi-scale, singularity, stress concentration, nonlinear hyperelasticity, heterogeneous, and complex geometry problems. Our results demonstrate that KINN significantly outperforms MLP in terms of accuracy and convergence speed for numerous PDEs in computational solid mechanics, except for the complex geometry problem. This highlights KINN's potential for more efficient and accurate PDE solutions in AI for PDEs. 8 authors · Jun 16, 2024
1 On Neural Differential Equations The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of NDEs. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equations solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art. 1 authors · Feb 4, 2022
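The residual-network correspondence is easy to see in code. Below is a minimal fixed-step Euler integration of an autonomous neural vector field (our sketch, not the thesis code); with a single step of size one, the update reduces to a residual block:

```python
# Minimal sketch: a neural ODE dy/dt = f(y) integrated with fixed-step Euler.
# With one step of size h = 1, the update y <- y + f(y) is exactly a residual
# block, which is the "discretisation" correspondence described above.
import torch

f = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 4))   # learned vector field

def odeint_euler(f, y0, t0=0.0, t1=1.0, steps=20):
    y, h = y0, (t1 - t0) / steps
    for _ in range(steps):
        y = y + h * f(y)              # gradients flow through every step
    return y

y1 = odeint_euler(f, torch.randn(8, 4))           # (batch, state_dim)
```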
- Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation We study the problem of in-context learning (ICL) with large language models (LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leak or regurgitate the private examples demonstrated in the prompt. We propose a novel algorithm that generates synthetic few-shot demonstrations from the private dataset with formal differential privacy (DP) guarantees, and show empirically that it can achieve effective ICL. We conduct extensive experiments on standard benchmarks and compare our algorithm with non-private ICL and zero-shot solutions. Our results demonstrate that our algorithm can achieve competitive performance with strong privacy levels. These results open up new possibilities for ICL with privacy protection for a broad range of applications. 9 authors · Sep 20, 2023
1 Matrix Calculus (for Machine Learning and Beyond) This course, intended for undergraduates familiar with elementary calculus and linear algebra, introduces the extension of differential calculus to functions on more general vector spaces, such as functions that take as input a matrix and return a matrix inverse or factorization, derivatives of ODE solutions, and even stochastic derivatives of random functions. It emphasizes practical computational applications, such as large-scale optimization and machine learning, where derivatives must be re-imagined in order to be propagated through complicated calculations. The class also discusses efficiency concerns leading to "adjoint" or "reverse-mode" differentiation (a.k.a. "backpropagation"), and gives a gentle introduction to modern automatic differentiation (AD) techniques. 3 authors · Jan 7 1
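A worked example in the spirit of the course: reverse-mode automatic differentiation reproduces the matrix-calculus identity grad_A log det A = A^{-T} without any element-wise bookkeeping:

```python
# For f(A) = log det A, matrix calculus gives grad f = A^{-T}; reverse-mode
# AD recovers this gradient in one backward pass.
import torch

A = torch.randn(5, 5)
A = A @ A.T + 5 * torch.eye(5)          # make A positive-definite (det > 0)
A.requires_grad_(True)
f = torch.logdet(A)
f.backward()
print(torch.allclose(A.grad, torch.inverse(A).T, atol=1e-5))  # True
```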
- Compact Einstein-type manifolds with parallel Ricci tensor In this paper, we deduce a Bochner-type identity for compact gradient Einstein-type manifolds with boundary. As a consequence, we are able to show a rigidity result for Einstein-type manifolds assuming the parallel Ricci curvature condition. Moreover, we provide a condition on the norm of the gradient of the potential function in order to classify such structures. 3 authors · Mar 4, 2024
2 Manifold Diffusion Fields We present Manifold Diffusion Fields (MDF), an approach to learn generative models of continuous functions defined over Riemannian manifolds. Leveraging insights from spectral geometry analysis, we define an intrinsic coordinate system on the manifold via the eigen-functions of the Laplace-Beltrami Operator. MDF represents functions using an explicit parametrization formed by a set of multiple input-output pairs. Our approach makes it possible to sample continuous functions on manifolds and is invariant with respect to rigid and isometric transformations of the manifold. Empirical results on several datasets and manifolds show that MDF can capture distributions of such functions with better diversity and fidelity than previous approaches. 3 authors · May 24, 2023
- ODEFormer: Symbolic Regression of Dynamical Systems with Transformers We introduce ODEFormer, the first transformer able to infer multidimensional ordinary differential equation (ODE) systems in symbolic form from the observation of a single solution trajectory. We perform extensive evaluations on two datasets: (i) the existing "Strogatz" dataset featuring two-dimensional systems; (ii) ODEBench, a collection of one- to four-dimensional systems that we carefully curated from the literature to provide a more holistic benchmark. ODEFormer consistently outperforms existing methods while displaying substantially improved robustness to noisy and irregularly sampled observations, as well as faster inference. We release our code, model and benchmark dataset publicly. 5 authors · Oct 9, 2023
1 PROSE: Predicting Operators and Symbolic Expressions using Multimodal Transformers Approximating nonlinear differential equations using a neural network provides a robust and efficient tool for various scientific computing tasks, including real-time predictions, inverse problems, optimal controls, and surrogate modeling. Previous works have focused on embedding dynamical systems into networks through two approaches: learning a single solution operator (i.e., the mapping from input parametrized functions to solutions) or learning the governing system of equations (i.e., the constitutive model relative to the state variables). Both of these approaches yield different representations for the same underlying data or function. Additionally, observing that families of differential equations often share key characteristics, we seek one network representation across a wide range of equations. Our method, called Predicting Operators and Symbolic Expressions (PROSE), learns maps from multimodal inputs to multimodal outputs, capable of generating both numerical predictions and mathematical equations. By using a transformer structure and a feature fusion approach, our network can simultaneously embed sets of solution operators for various parametric differential equations using a single trained network. Detailed experiments demonstrate that the network benefits from its multimodal nature, resulting in improved prediction accuracy and better generalization. The network is shown to be able to handle noise in the data and errors in the symbolic representation, including noisy numerical values, model misspecification, and erroneous addition or deletion of terms. PROSE provides a new neural network framework for differential equations which allows for more flexibility and generality in learning operators and governing equations from data. 3 authors · Sep 28, 2023
- Morse theory and Seiberg-Witten moduli spaces of 3-dimensional cobordisms, I Motivated by a variant of the Atiyah-Floer conjecture proposed in L2 and its potential generalizations, in this article and its sequel we study, as a first step, properties of moduli spaces of Seiberg-Witten equations on a 3-dimensional cobordism with cylindrical ends (CCE) Y, perturbed by closed 2-forms of the form r∗df + w, where r ≥ 1, f is a harmonic Morse function with certain linear growth at the ends of Y, and w is a certain closed 2-form. 1 authors · Dec 29, 2024
- High N/O ratio at high redshift as a result of a strong burst of star formation and differential galactic winds Recent observations by JWST have revealed supersolar ^{14}N abundances in galaxies at very high redshift. On the other hand, these galaxies show subsolar metallicity. The observed N/O ratios are difficult to reproduce in the framework of chemical evolution models for the Milky Way. Our aim is to reproduce these high N/O ratios with chemical evolution models assuming different histories of star formation triggering galactic winds coupled with detailed nucleosynthesis prescriptions for ^{14}N, ^{12}C, ^{16}O and ^{56}Fe. We compute several models for small galaxies (10^{9} - 10^{10} M_⊙) with high star formation efficiency and strong galactic winds. These winds are assumed to be differential, carrying out mainly the products of the explosion of core-collapse supernovae. We find that only models with high star formation rates, normal initial mass function, and differential galactic winds can reproduce the observed chemical abundances. We also find that with the same assumptions about star formation and galactic winds, but with a very rapid formation resulting from fast gas infall, we can also reproduce the estimated ages of these objects. We find no necessity to invoke peculiar nucleosynthesis from Population III stars, very massive stars and supermassive stars. 5 authors · Dec 6, 2024
1 Geometry on the Wasserstein space over a compact Riemannian manifold We will revisit the intrinsic differential geometry of the Wasserstein space over a Riemannian manifold, as developed in a series of papers by Otto, Villani, Lott, Ambrosio, Gigli, Savaré and others. 2 authors · Apr 2, 2021
- Reverse derivative categories The reverse derivative is a fundamental operation in machine learning and automatic differentiation. This paper gives a direct axiomatization of a category with a reverse derivative operation, in a similar style to that given by Cartesian differential categories for a forward derivative. Intriguingly, a category with a reverse derivative also has a forward derivative, but the converse is not true. In fact, we show explicitly what a forward derivative is missing: a reverse derivative is equivalent to a forward derivative with a dagger structure on its subcategory of linear maps. Furthermore, we show that these linear maps form an additively enriched category with dagger biproducts. 7 authors · Oct 15, 2019
- ConDiff: A Challenging Dataset for Neural Solvers of Partial Differential Equations We present ConDiff, a novel dataset for scientific machine learning. ConDiff focuses on the parametric diffusion equation with space dependent coefficients, a fundamental problem in many applications of partial differential equations (PDEs). The main novelty of the proposed dataset is that we consider discontinuous coefficients with high contrast. These coefficient functions are sampled from a selected set of distributions. This class of problems is not only of great academic interest, but is also the basis for describing various environmental and industrial problems. In this way, ConDiff shortens the gap with real-world problems while remaining fully synthetic and easy to use. ConDiff consists of a diverse set of diffusion equations with coefficients covering a wide range of contrast levels and heterogeneity with a measurable complexity metric for clearer comparison between different coefficient functions. We baseline ConDiff on standard deep learning models in the field of scientific machine learning. By providing a large number of problem instances, each with its own coefficient function and right-hand side, we hope to encourage the development of novel physics-based deep learning approaches, such as neural operators, ultimately driving progress towards more accurate and efficient solutions of complex PDE problems. 6 authors · Jun 7, 2024
- Einstein-Maxwell-Dilaton theories with a Liouville potential We find and analyse solutions of Einstein's equations in arbitrary d dimensions in the presence of a scalar field with a Liouville potential coupled to a Maxwell field. We consider spacetimes of cylindrical symmetry, or with (d-2)-dimensional subspaces of constant curvature, and analyse the field equations in detail, making their symmetries manifest. The field equations of the full system are shown to reduce to a single ODE or a pair of ODEs, which can be used to solve the theory analytically or numerically for the symmetry at hand. Further solutions can also be generated by a solution-generating technique akin to EM duality in the absence of a cosmological constant. We then find and analyse explicit solutions, including black holes and gravitating solitons, for the case of four-dimensional relativity and the oxidised five-dimensional spacetime. The general solution is obtained for a certain relation between couplings in the case of cylindrical symmetry. 3 authors · May 20, 2009
1 MeshSDF: Differentiable Iso-Surface Extraction Geometric Deep Learning has recently made striking progress with the advent of continuous Deep Implicit Fields. They allow for detailed modeling of watertight surfaces of arbitrary topology while not relying on a 3D Euclidean grid, resulting in a learnable parameterization that is not limited in resolution. Unfortunately, these methods are often not suitable for applications that require an explicit mesh-based surface representation because converting an implicit field to such a representation relies on the Marching Cubes algorithm, which cannot be differentiated with respect to the underlying implicit field. In this work, we remove this limitation and introduce a differentiable way to produce explicit surface mesh representations from Deep Signed Distance Functions. Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples with respect to the underlying deep implicit field. We exploit this to define MeshSDF, an end-to-end differentiable mesh representation which can vary its topology. We use two different applications to validate our theoretical insight: Single-View Reconstruction via Differentiable Rendering and Physically-Driven Shape Optimization. In both cases our differentiable parameterization gives us an edge over state-of-the-art algorithms. 7 authors · Jun 6, 2020
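The key relation can be sketched as follows: a surface sample v moves along the inward normal when the SDF value at v increases, dv/ds = -n. The function signature and plumbing below are our assumptions for illustration:

```python
# Sketch (signatures assumed): route gradients from marching-cubes vertices
# back into the SDF network using dv/ds = -n, i.e. locally increasing the SDF
# pushes a surface sample inward along the unit normal n.
import torch

def backprop_through_iso_surface(sdf_net, latent, verts, grad_verts):
    """verts: (V,3) marching-cubes vertices; grad_verts: dL/dv, shape (V,3)."""
    verts = verts.detach().requires_grad_(True)
    s = sdf_net(latent, verts)                             # (V,) SDF values
    n = torch.autograd.grad(s.sum(), verts, retain_graph=True)[0]
    n = n / n.norm(dim=1, keepdim=True)                    # unit normals
    grad_s = -(grad_verts * n).sum(dim=1)                  # dL/ds = -<dL/dv, n>
    s.backward(gradient=grad_s)            # accumulates into sdf_net params
```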
- Convergence of (generalized) power series solutions of functional equations Solutions of nonlinear functional equations generally cannot be expressed as finite combinations and compositions of elementary and known special functions. One of the approaches to studying them is, firstly, to find formal solutions (that is, series whose terms are described and ordered in some way but which do not converge a priori) and, secondly, to study the convergence or summability of these formal solutions (the existence and uniqueness of actual solutions with the given asymptotic expansion in a certain domain). In this paper we deal only with the convergence of formal functional series having the form of an infinite sum of power functions with (complex, in general) power exponents and satisfying analytical functional equations of one of the following three types: a differential, q-difference or Mahler equation. 2 authors · Dec 1, 2024
15 Self-Supervised Learning with Lie Symmetries for Partial Differential Equations Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and engineering. Though current algorithms typically require simulated training data tailored to a given setting, one may instead wish to learn useful information from heterogeneous sources, or from real dynamical systems observations that are messy or incomplete. In this work, we learn general-purpose representations of PDEs from heterogeneous data by implementing joint embedding methods for self-supervised learning (SSL), a framework for unsupervised representation learning that has had notable success in computer vision. Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers. We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs. 6 authors · Jul 11, 2023 1
- On Signs of eigenvalues of Modular forms satisfying Ramanujan Conjecture Let F in S_{k_1}(Gamma^{(2)}(N_1)) and G in S_{k_2}(Gamma^{(2)}(N_2)) be two Siegel cusp forms over the congruence subgroups Gamma^{(2)}(N_1) and Gamma^{(2)}(N_2) respectively. Assume that they are Hecke eigenforms in different eigenspaces and satisfy the Generalized Ramanujan Conjecture. Let lambda_F(p) denote the eigenvalue of F with respect to the Hecke operator T(p). In this article, we compute a lower bound for the density of the set of primes, { p : lambda_F(p) lambda_G(p) < 0 }. 1 authors · Dec 12, 2024
- A PINN Approach to Symbolic Differential Operator Discovery with Sparse Data Given ample experimental data from a system governed by differential equations, it is possible to use deep learning techniques to construct the underlying differential operators. In this work we perform symbolic discovery of differential operators in a situation where there is sparse experimental data. This small data regime in machine learning can be made tractable by providing our algorithms with prior information about the underlying dynamics. Physics Informed Neural Networks (PINNs) have been very successful in this regime (reconstructing entire ODE solutions using only a single point or entire PDE solutions with very few measurements of the initial condition). We modify the PINN approach by adding a neural network that learns a representation of unknown hidden terms in the differential equation. The algorithm yields both a surrogate solution to the differential equation and a black-box representation of the hidden terms. These hidden term neural networks can then be converted into symbolic equations using symbolic regression techniques like AI Feynman. In order to achieve convergence of these neural networks, we provide our algorithms with (noisy) measurements of both the initial condition as well as (synthetic) experimental data obtained at later times. We demonstrate strong performance of this approach even when provided with very few measurements of noisy data in both the ODE and PDE regime. 3 authors · Dec 8, 2022
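In code, the modification is compact. The toy setup below (our assumption: an ODE du/dt = -u + h(u) with h unknown) shows the surrogate solution network and the hidden-term network trained jointly against sparse data and the residual:

```python
# Toy sketch of the modified PINN: u_net approximates the solution u(t),
# h_net approximates the unknown hidden term h(u); both must satisfy the
# ODE residual du/dt = f_known(u) + h(u) at collocation points, with
# f_known(u) = -u assumed here for illustration.
import torch

u_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
h_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))

t = torch.linspace(0, 2, 128).unsqueeze(1).requires_grad_(True)
u = u_net(t)
du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
ode_residual = ((du - (-u + h_net(u))) ** 2).mean()
t_data, u_data = torch.tensor([[0.0]]), torch.tensor([[1.0]])  # sparse data
data_loss = ((u_net(t_data) - u_data) ** 2).mean()
loss = ode_residual + data_loss
# After training, h_net can be distilled into a symbolic expression with a
# symbolic-regression tool such as AI Feynman, as the abstract describes.
```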
- On κ-solutions and canonical neighborhoods in 4d Ricci flow We introduce a classification conjecture for κ-solutions in 4d Ricci flow. Our conjectured list includes known examples from the literature, but also a new 1-parameter family of Z_2^2 × O_3-symmetric bubble-sheet ovals that we construct. We observe that some special cases of the conjecture follow from recent results in the literature. We also introduce a stronger variant of the classification conjecture for ancient asymptotically cylindrical 4d Ricci flows, which does not assume smoothness and nonnegative curvature operator a priori. Assuming this stronger variant holds true, we establish a canonical neighborhood theorem for 4d Ricci flow through cylindrical singularities, which shares some elements in common with Perelman's canonical neighborhood theorem for 3d Ricci flow as well as the mean-convex neighborhood theorem for mean curvature flow through neck-singularities. Finally, we argue that quotient-necks lead to new phenomena, and sketch an example of non-uniqueness for 4d Ricci flow through singularities. 1 authors · Aug 2, 2023
- On the Dynamics of Acceleration in First order Gradient Methods Ever since the original algorithm by Nesterov (1983), the true nature of the acceleration phenomenon has remained elusive, with various interpretations of why the method is actually faster. The diagnosis of the algorithm through the lens of Ordinary Differential Equations (ODEs) and the corresponding dynamical system formulation to explain the underlying dynamics has a rich history. In the literature, the ODEs that explain algorithms are typically derived by considering the limiting case of the algorithm maps themselves, that is, an ODE formulation follows the development of an algorithm. This obfuscates the underlying higher order principles and thus provides little evidence of the working of the algorithm. Such has been the case with the Nesterov algorithm and the various analogies used to describe the acceleration phenomena, viz., momentum associated with the rolling of a Heavy-Ball down a slope, Hessian damping, etc. The main focus of our work is to trace the genesis of the Nesterov algorithm from the viewpoint of dynamical systems, demystifying the mathematics behind the algorithm. Instead of reverse engineering ODEs from discrete algorithms, this work explores tools from the recently developed control paradigm titled Passivity and Immersion approach and the Geometric Singular Perturbation theory, which are applied to arrive at the formulation of a dynamical system that explains and models the acceleration phenomena. This perspective helps to gain insights into the various terms present and the sequence of steps used in Nesterov's accelerated algorithm for the smooth strongly convex and the convex case. The framework can also be extended to derive the acceleration achieved using the triple momentum method and provides justifications for the non-convergence to the optimal solution in the Heavy-Ball method. 5 authors · Sep 22
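For readers unfamiliar with this ODE viewpoint, the classical example (due to Su, Boyd and Candès, and obtained in exactly the reverse-engineered way the abstract criticizes) is the limiting ODE of Nesterov's method:

```latex
% Limiting ODE of Nesterov's accelerated gradient method for minimizing f,
% derived by Su, Boyd and Candes (2014) as the step size tends to zero:
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0.
```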
- Lie Group Decompositions for Equivariant Neural Networks Invariance and equivariance to geometrical transformations have proven to be very useful inductive biases when training (convolutional) neural network models, especially in the low-data regime. Much work has focused on the case where the symmetry group employed is compact or abelian, or both. Recent work has explored enlarging the class of transformations used to the case of Lie groups, principally through the use of their Lie algebra, as well as the group exponential and logarithm maps. The applicability of such methods to larger transformation groups is limited by the fact that depending on the group of interest G, the exponential map may not be surjective. Further limitations are encountered when G is neither compact nor abelian. Using the structure and geometry of Lie groups and their homogeneous spaces, we present a framework by which it is possible to work with such groups, focusing primarily on the Lie groups G = GL^{+}(n, R) and G = SL(n, R), as well as their representation as affine transformations R^{n} ⋊ G. Invariant integration as well as a global parametrization is realized by decomposing the 'larger' groups into subgroups and submanifolds which can be handled individually. Under this framework, we show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations. We evaluate the robustness and out-of-distribution generalisation capability of our model on the standard affine-invariant benchmark classification task, where we outperform all previous equivariant models as well as all Capsule Network proposals. 2 authors · Oct 17, 2023
- Minimizing Trajectory Curvature of ODE-based Generative Models Recent ODE/SDE-based generative models, such as diffusion models, rectified flows, and flow matching, define a generative process as a time reversal of a fixed forward process. Even though these models show impressive performance on large-scale datasets, numerical simulation requires multiple evaluations of a neural network, leading to a slow sampling speed. We attribute the reason to the high curvature of the learned generative trajectories, as it is directly related to the truncation error of a numerical solver. Based on the relationship between the forward process and the curvature, here we present an efficient method of training the forward process to minimize the curvature of generative trajectories without any ODE/SDE simulation. Experiments show that our method achieves a lower curvature than previous models and, therefore, decreased sampling costs while maintaining competitive performance. Code is available at https://github.com/sangyun884/fast-ode. 3 authors · Jan 27, 2023
- Surface Patches with Rounded Corners We analyze surface patches with a corner that is rounded in the sense that the partial derivatives at that point are antiparallel. Sufficient conditions for G^1 smoothness are given, which, up to a certain degenerate case, are also necessary. Further, we investigate curvature integrability and present examples. 2 authors · Mar 23, 2022
- Correlation functions of degenerate fields in Super-Liouville field theory We study four-point correlation functions of degenerate fields in the NS sector of Super-Liouville field theory. We find integral expressions for these functions using the BPZ equation, and study some superconformal properties of these solutions. Finally, we present the general form for three-point correlation functions. 1 authors · Feb 17
- On the Existence of Solution of Conservation Law with Moving Bottleneck and Discontinuity in Flux In this paper, a PDE-ODE model with a discontinuity in the flux as well as a flux constraint is analyzed. A modified Riemann solution is proposed, and the existence of a weak solution to the Cauchy problem is rigorously investigated using the wavefront tracking scheme. 2 authors · Sep 30, 2023
- An operator preconditioning perspective on training in physics-informed machine learning In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods like PINNs, which minimize residuals connected to partial differential equations (PDEs). Our key result is that the difficulty in training these models is closely related to the conditioning of a specific differential operator. This operator, in turn, is associated with the Hermitian square of the differential operator of the underlying PDE. If this operator is ill-conditioned, it results in slow or infeasible training. Therefore, preconditioning this operator is crucial. We employ both rigorous mathematical analysis and empirical evaluations to investigate various strategies, explaining how they better condition this critical operator, and consequently improve training. 4 authors · Oct 9, 2023
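In notation we assume here for concreteness, the abstract's claim can be stated compactly: for a linear PDE, the residual loss has a gradient governed by the Hermitian square of the differential operator, whose condition number controls the speed of gradient descent:

```latex
% Notation assumed for illustration: linear PDE \mathcal{D}u = f,
% residual (PINN-style) loss and its gradient:
L(u) = \tfrac{1}{2}\,\bigl\lVert \mathcal{D}u - f \bigr\rVert_{L^2}^2,
\qquad
\nabla L(u) = \mathcal{D}^{*}(\mathcal{D}u - f) = \mathcal{A}\,u - \mathcal{D}^{*}f,
\qquad
\mathcal{A} := \mathcal{D}^{*}\mathcal{D}.
% Gradient descent on L converges at a rate governed by the condition number
% \kappa(\mathcal{A}) = \lambda_{\max}(\mathcal{A}) / \lambda_{\min}(\mathcal{A});
% preconditioning seeks P with \kappa(P^{-1}\mathcal{A}) \ll \kappa(\mathcal{A}).
```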
- Specializations of partial differential equations for Feynman integrals Starting from the Mellin-Barnes integral representation of a Feynman integral depending on a set of kinematic variables z_i, we derive a system of partial differential equations w.r.t. new variables x_j, which parameterize the differentiable constraints z_i = y_i(x_j). In our algorithm, the powers of propagators can be considered as arbitrary parameters. Our algorithm can also be used for the reduction of multiple hypergeometric sums to sums of lower dimension, finding special values and reduction equations of hypergeometric functions in a singular locus of continuous variables, or finding systems of partial differential equations for master integrals with arbitrary powers of propagators. As an illustration, we produce a differential equation of fourth order in one variable for the one-loop two-point Feynman diagram with two different masses and arbitrary propagator powers. 3 authors · Jul 18, 2022
- Deep Learning solutions to singular ordinary differential equations: from special functions to spherical accretion Regular singular points often arise in differential equations describing physical phenomena such as fluid dynamics, electromagnetism, and gravitation. Traditional numerical techniques often fail or become unstable near these points, requiring the use of semi-analytical tools, such as series expansions and perturbative methods, in combination with numerical algorithms, or the use of more sophisticated methods. In this work, we take an alternative route and leverage the power of machine learning, exploiting Physics Informed Neural Networks (PINNs) as a modern approach to solving ordinary differential equations with singular points. PINNs utilize deep learning architectures to approximate solutions by embedding the differential equations into the loss function of the neural network. We discuss the advantages of PINNs in handling singularities, particularly their ability to bypass traditional grid-based methods and provide smooth approximations across irregular regions. Techniques for enhancing the accuracy of PINNs near singular points, such as adaptive loss weighting, are used to achieve high efficiency in training the network. We exemplify our results by studying four differential equations of interest in mathematics and gravitation -- the Legendre equation, the hypergeometric equation, the solution for black hole space-times in theories of Lorentz violating gravity, and the spherical accretion of a perfect fluid in a Schwarzschild geometry. 3 authors · Sep 30, 2024
- Neural Implicit Surface Evolution This work investigates the use of smooth neural networks for modeling dynamic variations of implicit surfaces under the level set equation (LSE). For this, it extends the representation of neural implicit surfaces to the space-time R^3 × R, which opens up mechanisms for continuous geometric transformations. Examples include evolving an initial surface towards general vector fields, smoothing and sharpening using the mean curvature equation, and interpolations of initial conditions. The network training considers two constraints. A data term is responsible for fitting the initial condition to the corresponding time instant, usually R^3 × {0}. Then, an LSE term forces the network to approximate the underlying geometric evolution given by the LSE, without any supervision. The network can also be initialized based on previously trained initial conditions, resulting in faster convergence compared to the standard approach. 6 authors · Jan 24, 2022
- Optimal sources for elliptic PDEs We investigate optimal control problems governed by the elliptic partial differential equation -Delta u = f subject to Dirichlet boundary conditions on a given domain Omega. The control variable in this setting is the right-hand side f, and the objective is to minimize a cost functional that depends simultaneously on the control f and on the associated state function u. We establish the existence of optimal controls and analyze their qualitative properties by deriving necessary conditions for optimality. In particular, when pointwise constraints of the form alpha ≤ f ≤ beta are imposed a priori on the control, we examine situations where a bang-bang phenomenon arises, that is, where the optimal control f assumes only the extremal values alpha and beta. More precisely, the control takes the form f = alpha·1_E + beta·1_{Omega∖E}, thereby placing the problem within the framework of shape optimization. Under suitable assumptions, we further establish certain regularity properties for the optimal sets E. Finally, in the last part of the paper, we present numerical simulations that illustrate our theoretical findings through a selection of representative examples. 3 authors · Sep 1
- Lagrangian Flow Networks for Conservation Laws We introduce Lagrangian Flow Networks (LFlows) for modeling fluid densities and velocities continuously in space and time. By construction, the proposed LFlows satisfy the continuity equation, the PDE describing mass conservation in differential form. Our model is based on the insight that solutions to the continuity equation can be expressed as time-dependent density transformations via differentiable and invertible maps. This follows from classical theory of the existence and uniqueness of Lagrangian flows for smooth vector fields. Hence, we model fluid densities by transforming a base density with parameterized diffeomorphisms conditioned on time. The key benefit compared to methods relying on numerical ODE solvers or PINNs is that the analytic expression of the velocity is always consistent with changes in density. Furthermore, we require neither expensive numerical solvers, nor additional penalties to enforce the PDE. LFlows show higher predictive accuracy in density modeling tasks compared to competing models in 2D and 3D, while being computationally efficient. As a real-world application, we model bird migration based on sparse weather radar measurements. 5 authors · May 26, 2023
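A one-dimensional toy version of the construction (the map and density below are stand-ins, not the paper's architecture) shows why the velocity is always consistent with the density: both derive from the same time-conditioned invertible map:

```python
# Toy 1D illustration: a time-conditioned diffeomorphism phi transports a
# base density; the transported density and the velocity field both derive
# from the same map, so the continuity equation holds by construction.
import torch

def phi(x0, t):                       # invertible map, conditioned on time
    return x0 * torch.exp(0.1 * t) + t

def phi_inv(x, t):
    return (x - t) * torch.exp(-0.1 * t)

def log_density(x, t, base_log_prob):
    # change of variables: log rho_t(x) = log rho_0(phi^{-1}(x)) + log|d phi^{-1}/dx|
    return base_log_prob(phi_inv(x, t)) - 0.1 * t   # d phi^{-1}/dx = exp(-0.1 t)

def velocity(x, t):                   # v(x,t) = d/dt phi(x0, t) at x0 = phi^{-1}(x)
    t = t.clone().requires_grad_(True)
    xt = phi(phi_inv(x, t.detach()), t)             # hold x0 fixed, vary t
    return torch.autograd.grad(xt.sum(), t)[0]
```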
- Locally Regularized Neural Differential Equations: Some Black Boxes Were Meant to Remain Closed! Implicit layer deep learning techniques, like Neural Differential Equations, have become an important modeling framework due to their ability to adapt to new problems automatically. Training a neural differential equation is effectively a search over a space of plausible dynamical systems. However, controlling the computational cost for these models is difficult since it relies on the number of steps the adaptive solver takes. Most prior works have used higher-order methods to reduce prediction timings while greatly increasing training time or reducing both training and prediction timings by relying on specific training algorithms, which are harder to use as a drop-in replacement due to strict requirements on automatic differentiation. In this manuscript, we use internal cost heuristics of adaptive differential equation solvers at stochastic time points to guide the training toward learning a dynamical system that is easier to integrate. We "close the black-box" and allow the use of our method with any adjoint technique for gradient calculations of the differential equation solution. We perform experimental studies to compare our method to global regularization to show that we attain similar performance numbers without compromising the flexibility of implementation on ordinary differential equations (ODEs) and stochastic differential equations (SDEs). We develop two sampling strategies to trade off between performance and training time. Our method reduces the number of function evaluations to 0.556-0.733x and accelerates predictions by 1.3-2x. 3 authors · Mar 3, 2023
- Adversarial Classification: Necessary conditions and geometric flows We study a version of adversarial classification where an adversary is empowered to corrupt data inputs up to some distance epsilon, using tools from variational analysis. In particular, we describe necessary conditions associated with the optimal classifier subject to such an adversary. Using the necessary conditions, we derive a geometric evolution equation which can be used to track the change in classification boundaries as epsilon varies. This evolution equation may be described as an uncoupled system of differential equations in one dimension, or as a mean curvature type equation in higher dimension. In one dimension, and under mild assumptions on the data distribution, we rigorously prove that one can use the initial value problem starting from epsilon=0, which is simply the Bayes classifier, in order to solve for the global minimizer of the adversarial problem for small values of epsilon. In higher dimensions we provide a similar result, albeit conditional on the existence of regular solutions of the initial value problem. In the process of proving our main results we obtain a result of independent interest connecting the original adversarial problem with an optimal transport problem under no assumptions on whether classes are balanced or not. Numerical examples illustrating these ideas are also presented. 2 authors · Nov 21, 2020
- DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shape's interior or not. While classical SDFs, both in analytical and discretized voxel form, typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work. 5 authors · Jan 15, 2019
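A compact sketch of the auto-decoder setup (layer sizes and shapes are illustrative; the clamped-SDF regression loss follows the paper's description):

```python
# DeepSDF-style auto-decoder sketch: one latent code per training shape,
# optimized jointly with a shared MLP mapping (latent, 3D point) -> SDF.
import torch

num_shapes, latent_dim = 100, 256
codes = torch.nn.Embedding(num_shapes, latent_dim)     # per-shape latents
mlp = torch.nn.Sequential(
    torch.nn.Linear(latent_dim + 3, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 1),
)

def sdf_loss(shape_ids, points, sdf_gt, delta=0.1):
    z = codes(shape_ids)                               # (N, latent_dim)
    pred = mlp(torch.cat([z, points], dim=1)).squeeze(1)
    clamp = lambda s: torch.clamp(s, -delta, delta)    # focus near the surface
    return (clamp(pred) - clamp(sdf_gt)).abs().mean()
```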
7 LookingGlass: Generative Anamorphoses via Laplacian Pyramid Warping Anamorphosis refers to a category of images that are intentionally distorted, making them unrecognizable when viewed directly. Their true form reveals itself only when seen from a specific viewpoint, often through a catadioptric device such as a mirror or a lens. While the construction of these devices can be traced back to as early as the 17th century, the resulting images are only interpretable when viewed from a specific vantage point and tend to lose meaning when seen normally. In this paper, we revisit these famous optical illusions with a generative twist. With the help of latent rectified flow models, we propose a method to create anamorphic images that still retain a valid interpretation when viewed directly. To this end, we introduce Laplacian Pyramid Warping, a frequency-aware image warping technique key to generating high-quality visuals. Our work extends Visual Anagrams (arXiv:2311.17919) to latent space models and to a wider range of spatial transforms, enabling the creation of novel generative perceptual illusions. 5 authors · Apr 11 6
- A Group Symmetric Stochastic Differential Equation Model for Molecule Multi-modal Pretraining Molecule pretraining has quickly become the go-to schema to boost the performance of AI-based drug discovery. Naturally, molecules can be represented as 2D topological graphs or 3D geometric point clouds. Although most existing pretraining methods focus on merely a single modality, recent research has shown that maximizing the mutual information (MI) between such two modalities enhances the molecule representation ability. Meanwhile, existing molecule multi-modal pretraining approaches approximate MI based on the representation space encoded from the topology and geometry, thus resulting in the loss of critical structural information of molecules. To address this issue, we propose MoleculeSDE. MoleculeSDE leverages group symmetric (e.g., SE(3)-equivariant and reflection-antisymmetric) stochastic differential equation models to generate the 3D geometries from 2D topologies, and vice versa, directly in the input space. It not only obtains a tighter MI bound but also enables a broader range of downstream tasks than previous work. By comparing with 17 pretraining baselines, we empirically verify that MoleculeSDE can learn an expressive representation with state-of-the-art performance on 26 out of 32 downstream tasks. 5 authors · May 28, 2023
- Elliptic genera of two-dimensional N=2 gauge theories with rank-one gauge groups We compute the elliptic genera of two-dimensional N=(2,2) and N=(0,2) gauged linear sigma models via supersymmetric localization, for rank-one gauge groups. The elliptic genus is expressed as a sum over residues of a meromorphic function whose argument is the holonomy of the gauge field along both the spatial and the temporal directions of the torus. We illustrate our formulas by a few examples including the quintic Calabi-Yau, N=(2,2) SU(2) and O(2) gauge theories coupled to N fundamental chiral multiplets, and a geometric N=(0,2) model. 4 authors · May 2, 2013
- Reverse Derivative Ascent: A Categorical Approach to Learning Boolean Circuits We introduce Reverse Derivative Ascent: a categorical analogue of gradient based methods for machine learning. Our algorithm is defined at the level of so-called reverse differential categories. It can be used to learn the parameters of models which are expressed as morphisms of such categories. Our motivating example is boolean circuits: we show how our algorithm can be applied to such circuits by using the theory of reverse differential categories. Note that our methodology allows us to learn the parameters of boolean circuits directly, in contrast to existing binarised neural network approaches. Moreover, we demonstrate its empirical value by giving experimental results on benchmark machine learning datasets. 2 authors · Jan 25, 2021
- Simplifying Momentum-based Positive-definite Submanifold Optimization with Applications to Deep Learning Riemannian submanifold optimization with momentum is computationally challenging because, to ensure that the iterates remain on the submanifold, we often need to solve difficult differential equations. Here, we simplify such difficulties for a class of structured symmetric positive-definite matrices with the affine-invariant metric. We do so by proposing a generalized version of the Riemannian normal coordinates that dynamically orthonormalizes the metric and locally converts the problem into an unconstrained problem in the Euclidean space. We use our approach to simplify existing approaches for structured covariances and develop matrix-inverse-free second-order optimizers for deep learning in low-precision settings. Code: https://github.com/yorkerlin/StructuredNGD-DL 6 authors · Feb 19, 2023
- A Lie Group Approach to Riemannian Batch Normalization Manifold-valued measurements exist in numerous applications within computer vision and machine learning. Recent studies have extended Deep Neural Networks (DNNs) to manifolds, and concomitantly, normalization techniques have also been adapted to several manifolds, referred to as Riemannian normalization. Nonetheless, most of the existing Riemannian normalization methods have been derived in an ad hoc manner and only apply to specific manifolds. This paper establishes a unified framework for Riemannian Batch Normalization (RBN) techniques on Lie groups. Our framework offers the theoretical guarantee of controlling both the Riemannian mean and variance. Empirically, we focus on Symmetric Positive Definite (SPD) manifolds, which possess three distinct types of Lie group structures. Using the deformation concept, we generalize the existing Lie groups on SPD manifolds into three families of parameterized Lie groups. Specific normalization layers induced by these Lie groups are then proposed for SPD neural networks. We demonstrate the effectiveness of our approach through three sets of experiments: radar recognition, human action recognition, and electroencephalography (EEG) classification. The code is available at https://github.com/GitZH-Chen/LieBN.git. 4 authors · Mar 17, 2024
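For a concrete reference point on the entry above: a minimal sketch of Riemannian batch normalization on SPD matrices under the log-Euclidean metric, one of the Lie group structures mentioned in the abstract. Statistics are centered and rescaled in the Lie algebra, which is what controlling the Riemannian mean and variance amounts to there. This is an illustrative toy, not the paper's LieBN layer (which, among other things, also carries learnable bias and scale).

    import numpy as np

    def logm_spd(X):
        w, U = np.linalg.eigh(X)
        return (U * np.log(w)) @ U.T

    def expm_sym(S):
        w, U = np.linalg.eigh(S)
        return (U * np.exp(w)) @ U.T

    def spd_batch_norm(batch, eps=1e-5):
        logs = np.stack([logm_spd(X) for X in batch])
        mu = logs.mean(axis=0)                            # log of the Frechet mean
        var = ((logs - mu) ** 2).sum(axis=(1, 2)).mean()  # Riemannian variance
        normed = (logs - mu) / np.sqrt(var + eps)         # center and rescale in the Lie algebra
        return [expm_sym(S) for S in normed]

    rng = np.random.default_rng(0)
    A = [rng.standard_normal((3, 3)) for _ in range(8)]
    batch = [expm_sym(a + a.T) for a in A]                # random SPD inputs
    out = spd_batch_norm(batch)                           # Frechet mean = I, unit variance (up to eps)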
- Multiplicities of Eigenvalues of the Diffusion Operator with Random Jumps from the Boundary This paper deals with a non-self-adjoint differential operator which is associated with a diffusion process with random jumps from the boundary. Our main result is that the algebraic multiplicity of an eigenvalue is equal to its order as a zero of the characteristic function Delta(lambda). This can be used to determine the multiplicities of eigenvalues for concrete operators. 2 authors · Jan 31, 2018
- Smooth Normalizing Flows Normalizing flows are a promising tool for modeling probability distributions in physical systems. While state-of-the-art flows accurately approximate distributions and energies, applications in physics additionally require smooth energies to compute forces and higher-order derivatives. Furthermore, such densities are often defined on non-trivial topologies. Recent examples are Boltzmann Generators for generating 3D structures of peptides and small proteins. These generative models leverage the space of internal coordinates (dihedrals, angles, and bonds), which is a product of hypertori and compact intervals. In this work, we introduce a class of smooth mixture transformations working on both compact intervals and hypertori. In practice, mixture transformations must be inverted by root-finding methods, which has so far prevented bi-directional flow training. To this end, we show that parameter gradients and forces of such inverses can be computed from forward evaluations via the inverse function theorem. We demonstrate two advantages of such smooth flows: they allow training by force matching to simulation data and can be used as potentials in molecular dynamics simulations. 3 authors · Oct 1, 2021
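The inverse-function-theorem trick in the abstract above is easy to see in one dimension. A small sketch (ours, with a made-up strictly increasing map standing in for a mixture transformation): invert numerically by bisection, then obtain the derivative of the inverse purely from forward evaluations, since (f^{-1})'(y) = 1 / f'(f^{-1}(y)).

    import numpy as np

    def f(x):          # smooth, strictly increasing on [0, 1]
        return x + 0.25 * np.sin(2 * np.pi * x) * x**2

    def fprime(x):
        return 1 + 0.25 * (2 * np.pi * np.cos(2 * np.pi * x) * x**2
                           + 2 * x * np.sin(2 * np.pi * x))

    def f_inverse(y, lo=0.0, hi=1.0, iters=60):
        for _ in range(iters):              # plain bisection root-finding
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < y else (lo, mid)
        return 0.5 * (lo + hi)

    y = 0.6
    x = f_inverse(y)
    ift = 1.0 / fprime(x)                   # inverse function theorem
    fd = (f_inverse(y + 1e-6) - f_inverse(y - 1e-6)) / 2e-6
    print(ift, fd)                          # agree to several digits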
- Geometric Clifford Algebra Networks We propose Geometric Clifford Algebra Networks (GCANs) for modeling dynamical systems. GCANs are based on symmetry group transformations using geometric (Clifford) algebras. We first review the quintessence of modern (plane-based) geometric algebra, which builds on isometries encoded as elements of the Pin(p,q,r) group. We then propose the concept of group action layers, which linearly combine object transformations using pre-specified group actions. Together with a new activation and normalization scheme, these layers serve as adjustable geometric templates that can be refined via gradient descent. Theoretical advantages are strongly reflected in the modeling of three-dimensional rigid body transformations as well as large-scale fluid dynamics simulations, showing significantly improved performance over traditional methods. 5 authors · Feb 13, 2023
- The generalized roof F(1,2,n): Hodge structures and derived categories We consider generalized homogeneous roofs, i.e. quotients of simply connected, semisimple Lie groups by a parabolic subgroup, which admit two projective bundle structures. Given a general hyperplane section on such a variety, we consider the zero loci of its pushforwards along the projective bundle structures and we discuss their properties at the level of Hodge structures. In the case of the flag variety F(1,2,n) with its projections to P^{n-1} and G(2, n), we construct a derived embedding of the relevant zero loci by methods based on the study of B-brane categories in the context of a gauged linear sigma model. 4 authors · Oct 20, 2021
- Higher Order Automatic Differentiation of Higher Order Functions We present semantic correctness proofs of automatic differentiation (AD). We consider a forward-mode AD method on a higher order language with algebraic data types, and we characterise it as the unique structure preserving macro given a choice of derivatives for basic operations. We describe a rich semantics for differentiable programming, based on diffeological spaces. We show that it interprets our language, and we phrase what it means for the AD method to be correct with respect to this semantics. We show that our characterisation of AD gives rise to an elegant semantic proof of its correctness based on a gluing construction on diffeological spaces. We explain how this is, in essence, a logical relations argument. Throughout, we show how the analysis extends to AD methods for computing higher order derivatives using a Taylor approximation. 3 authors · Jan 17, 2021
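For readers unfamiliar with the forward-mode method being characterized above, a minimal dual-number sketch: each primitive operation is extended by its chain rule, truncating at first order. The Taylor-approximation variant the abstract mentions keeps higher-order coefficients instead of a single tangent.

    import math
    from dataclasses import dataclass

    @dataclass
    class Dual:
        val: float   # f(x)
        tan: float   # f'(x)

        def __add__(self, o):
            return Dual(self.val + o.val, self.tan + o.tan)

        def __mul__(self, o):  # product rule
            return Dual(self.val * o.val, self.val * o.tan + self.tan * o.val)

    def sin(d: Dual) -> Dual:  # chain rule for a basic operation
        return Dual(math.sin(d.val), math.cos(d.val) * d.tan)

    def deriv(f, x):
        return f(Dual(x, 1.0)).tan

    # d/dx [x * sin(x)] = sin(x) + x cos(x)
    print(deriv(lambda x: x * sin(x), 2.0))
    print(math.sin(2.0) + 2.0 * math.cos(2.0))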
1 Distilling ODE Solvers of Diffusion Models into Smaller Steps Distillation techniques have substantially improved the sampling speed of diffusion models, allowing generation in only one or a few steps. However, these distillation methods require extensive training for each dataset, sampler, and network, which limits their practical applicability. To address this limitation, we propose a straightforward distillation approach, Distilled-ODE solvers (D-ODE solvers), that optimizes the ODE solver rather than training the denoising network. D-ODE solvers are formulated by simply applying a single parameter adjustment to existing ODE solvers. Subsequently, D-ODE solvers with smaller steps are optimized by ODE solvers with larger steps through distillation over a batch of samples. Our comprehensive experiments indicate that D-ODE solvers outperform existing ODE solvers, including DDIM, PNDM, DPM-Solver, DEIS, and EDM, especially when generating samples with fewer steps. Our method incurs negligible computational overhead compared to previous distillation techniques, enabling simple and rapid integration with previous samplers. Qualitative analysis further shows that D-ODE solvers enhance image quality while preserving the sampling trajectory of ODE solvers. 3 authors · Sep 28, 2023
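To convey the flavor of the idea only (a cartoon with a toy linear field, not the paper's D-ODE formulation), a single scalar can be fitted by least squares so that one coarse solver step matches a fine-grained teacher over a batch:

    import numpy as np

    rng = np.random.default_rng(0)

    def teacher_step(x, t, dt, field, substeps=4):
        # fine-grained Euler solve across [t, t+dt]: the expensive reference
        for _ in range(substeps):
            x = x + (dt / substeps) * field(x, t)
            t = t + dt / substeps
        return x

    def student_step(x, t, dt, field, lam):
        # one coarse Euler step whose direction is rescaled by a single
        # distilled parameter lam (our stand-in for "a single parameter adjustment")
        return x + lam * dt * field(x, t)

    field = lambda x, t: -x                      # toy probability-flow field
    x0, t, dt = rng.standard_normal(1000), 0.0, 0.5
    target = teacher_step(x0, t, dt, field)
    d = dt * field(x0, t)
    lam = np.dot(d, target - x0) / np.dot(d, d)  # closed-form least-squares fit
    print(lam, np.mean((student_step(x0, t, dt, field, lam) - target) ** 2))

Because the toy field is linear, the fitted student matches the teacher exactly; in the diffusion setting the fit is only approximate, which is what the distillation loss controls.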
2 All Weight Systems for Calabi-Yau Fourfolds from Reflexive Polyhedra For any given dimension d, all reflexive d-polytopes can be found (in principle) as subpolytopes of a number of maximal polyhedra that are defined in terms of (d+1)-tuples of integers (weights), or combinations of k-tuples of weights with k<d+1. We present the results of a complete classification of sextuples of weights pertaining to the construction of all reflexive polytopes in five dimensions. We find 322 383 760 930 such weight systems. 185 269 499 015 of them give rise directly to reflexive polytopes and thereby to mirror pairs of Calabi-Yau fourfolds. These lead to 532 600 483 distinct sets of Hodge numbers. 2 authors · Aug 7, 2018
- Learning the Dynamics of Sparsely Observed Interacting Systems We address the problem of learning the dynamics of an unknown non-parametric system linking a target and a feature time series. The feature time series is measured on a sparse and irregular grid, while we have access to only a few points of the target time series. Once learned, we can use these dynamics to predict values of the target from the previous values of the feature time series. We frame this task as learning the solution map of a controlled differential equation (CDE). By leveraging the rich theory of signatures, we are able to cast this non-linear problem as a high-dimensional linear regression. We provide an oracle bound on the prediction error which exhibits explicit dependencies on the individual-specific sampling schemes. Our theoretical results are illustrated by simulations which show that our method outperforms existing algorithms for recovering the full time series while being computationally cheap. We conclude by demonstrating its potential on real-world epidemiological data. 4 authors · Jan 27, 2023
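A toy version of the signature-plus-linear-regression pipeline from the abstract above, with depth-2 signatures computed by hand from path increments (the feature layout and regression target here are our own invention):

    import numpy as np

    def signature_level2(path):
        # path: (T, d) array of samples of a d-dimensional path
        inc = np.diff(path, axis=0)                 # increments dX_t
        s1 = inc.sum(axis=0)                        # level 1: total increment
        dev = np.cumsum(inc, axis=0) - inc          # X_{t_k} - X_{t_0}
        s2 = dev.T @ inc                            # level 2: iterated integrals (Riemann sums)
        return np.concatenate([s1, s2.ravel()])

    rng = np.random.default_rng(0)
    paths = [np.cumsum(rng.standard_normal((50, 2)), axis=0) for _ in range(200)]
    X = np.stack([signature_level2(p) for p in paths])
    # a target that is genuinely a function of the path (its antisymmetric level-2 part)
    y = X[:, 3] - X[:, 4] + 0.01 * rng.standard_normal(200)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.round(beta, 2))                        # approximately (0, 0, 0, 1, -1, 0)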
- Towards Hierarchical Rectified Flow We formulate a hierarchical rectified flow to model data distributions. It hierarchically couples multiple ordinary differential equations (ODEs) and defines a time-differentiable stochastic process that generates a data distribution from a known source distribution. Each ODE resembles the ODE that is solved in a classic rectified flow, but differs in its domain, i.e., location, velocity, acceleration, etc. Unlike the classic rectified flow formulation, which formulates a single ODE in the location domain and only captures the expected velocity field (sufficient to capture a multi-modal data distribution), the hierarchical rectified flow formulation models the multi-modal random velocity field, acceleration field, etc., in their entirety. This more faithful modeling of the random velocity field enables integration paths to intersect when the underlying ODE is solved during data generation. Intersecting paths in turn lead to integration trajectories that are more straight than those obtained in the classic rectified flow formulation, where integration paths cannot intersect. This leads to modeling of data distributions with fewer neural function evaluations. We empirically verify this on synthetic 1D and 2D data as well as MNIST, CIFAR-10, and ImageNet-32 data. Our code is available at: https://riccizz.github.io/HRF/. 4 authors · Feb 24
- Geometry aware inference of steady state PDEs using Equivariant Neural Fields representations Recent advances in Neural Fields have enabled powerful, discretization-invariant methods for learning neural operators that approximate solutions of Partial Differential Equations (PDEs) on general geometries. Building on these developments, we introduce enf2enf, an encoder-decoder methodology for predicting steady-state Partial Differential Equations with non-parameterized geometric variability, based on recently proposed Equivariant Neural Field architectures. In enf2enf, input geometries are encoded into latent point cloud embeddings that inherently preserve geometric grounding and capture local phenomena. The resulting representations are then combined with global parameters and directly decoded into continuous output fields, thus efficiently modeling the coupling between geometry and physics. By leveraging the inductive biases of locality and translation invariance, our approach is able to capture fine-scale physical features as well as complex shape variations, thereby enhancing generalization and physical compliance. Extensive experiments on a high-fidelity aerodynamic dataset, a hyper-elastic material benchmark, and multi-element airfoil geometries, demonstrate that the proposed model achieves superior or competitive performance compared to state-of-the-art graph based, operator learning, and neural field methods. Notably, our method supports real time inference and zero-shot super-resolution, enabling efficient training on low-resolution meshes while maintaining high accuracy on full-scale discretizations. 5 authors · Apr 24
- One-connection rule for structural equation models Linear structural equation models are multivariate statistical models encoded by mixed graphs. In particular, the set of covariance matrices for distributions belonging to a linear structural equation model for a fixed mixed graph G=(V, D,B) is parameterized by a rational function with parameters for each vertex and edge in G. This rational parametrization naturally allows for the study of these models from an algebraic and combinatorial point of view. Indeed, this point of view has led to a collection of results in the literature, mainly focusing on questions related to identifiability and determining relationships between covariances (i.e., finding polynomials in the Gaussian vanishing ideal). So far, a large proportion of these results has focused on the case when D, the directed part of the mixed graph G, is acyclic. This is due to the fact that in the acyclic case, the parametrization becomes polynomial and there is a description of the entries of the covariance matrices in terms of a finite sum. We move beyond the acyclic case and give a closed form expression for the entries of the covariance matrices in terms of the one-connections in a graph obtained from D through some small operations. This closed form expression then allows us to show that if G is simple, then the parametrization map is generically finite-to-one. Finally, having a closed form expression for the covariance matrices allows for the development of an algorithm for systematically exploring possible polynomials in the Gaussian vanishing ideal. 4 authors · Oct 1, 2022
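For context, the standard parametrization the abstract above presupposes (a known fact, not the paper's new one-connection formula): writing X = Lambda^T X + epsilon with epsilon ~ N(0, Omega), where lambda_{ij} is supported on the directed edges of D and omega_{ij} on the bidirected edges of B (plus the diagonal), the covariance matrices of the model are

    \Sigma \;=\; (I - \Lambda)^{-\top}\, \Omega\, (I - \Lambda)^{-1},
    \qquad
    (I - \Lambda)^{-1} \;=\; \sum_{k \ge 0} \Lambda^k .

The Neumann series on the right terminates precisely when D is acyclic, which is why the parametrization is polynomial in that case; the paper's one-connection expression replaces this series in the cyclic setting.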
- Approximate Axiomatization for Differentially-Defined Functions This article establishes a complete approximate axiomatization for the real-closed field R expanded with all differentially-defined functions, including special functions such as sin(x), cos(x), and e^x. Every true sentence is provable up to some numerical approximation, and the truth of such approximations converges under mild conditions. Such an axiomatization is a fragment of the axiomatization for differential dynamic logic, and is therefore a finite extension of the axiomatization of real-closed fields. Furthermore, the numerical approximations approximate formulas containing special function symbols by FOL_{R} formulas, improving upon earlier decidability results concerning only closed sentences. 2 authors · Jun 9
- Neural Network Approximations of PDEs Beyond Linearity: A Representational Perspective A burgeoning line of research leverages deep neural networks to approximate the solutions to high dimensional PDEs, opening lines of theoretical inquiry focused on explaining how it is that these models appear to evade the curse of dimensionality. However, most prior theoretical analyses have been limited to linear PDEs. In this work, we take a step towards studying the representational power of neural networks for approximating solutions to nonlinear PDEs. We focus on a class of PDEs known as nonlinear elliptic variational PDEs, whose solutions minimize an Euler-Lagrange energy functional E(u) = int_Omega (L(x, u(x), nabla u(x)) - f(x) u(x)) dx. We show that if composing a function with Barron norm b with partial derivatives of L produces a function of Barron norm at most B_L b^p, the solution to the PDE can be epsilon-approximated in the L^2 sense by a function with Barron norm O((d B_L)^{max(p log(1/epsilon), p^{log(1/epsilon)})}). By a classical result due to Barron [1993], this correspondingly bounds the size of a 2-layer neural network needed to approximate the solution. Treating p, epsilon, B_L as constants, this quantity is polynomial in dimension, thus showing neural networks can evade the curse of dimensionality. Our proof technique involves neurally simulating (preconditioned) gradient descent in an appropriate Hilbert space, which converges exponentially fast to the solution of the PDE, and such that we can bound the increase of the Barron norm at each iterate. Our results subsume and substantially generalize analogous prior results for linear elliptic PDEs over a unit hypercube. 4 authors · Oct 21, 2022
- Regularity of shadows and the geometry of the singular set associated to a Monge-Ampere equation Illuminating the surface of a convex body with parallel beams of light in a given direction generates a shadow region. We prove sharp regularity results for the boundary of this shadow in every direction of illumination. Moreover, techniques are developed for investigating the regularity of the region generated by orthogonally projecting a convex set onto another. As an application we study the geometry and Hausdorff dimension of the singular set corresponding to a Monge-Ampere equation. 2 authors · Nov 22, 2013
8 Einstein Fields: A Neural Perspective To Computational General Relativity We introduce Einstein Fields, a neural representation that is designed to compress computationally intensive four-dimensional numerical relativity simulations into compact implicit neural network weights. By modeling the metric, which is the core tensor field of general relativity, Einstein Fields enable the derivation of physical quantities via automatic differentiation. However, unlike conventional neural fields (e.g., signed distance, occupancy, or radiance fields), Einstein Fields are Neural Tensor Fields with the key difference that when encoding the spacetime geometry of general relativity into neural field representations, dynamics emerge naturally as a byproduct. Einstein Fields show remarkable potential, including continuum modeling of 4D spacetime, mesh-agnosticity, storage efficiency, derivative accuracy, and ease of use. We demonstrate these capabilities across several canonical test beds of general relativity and release an open-source JAX-based library, paving the way for more scalable and expressive approaches to numerical relativity. Code is made available at https://github.com/AndreiB137/EinFields 4 authors · Jul 15
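The phrase "physical quantities via automatic differentiation" has a very concrete meaning: once the metric is a differentiable function of coordinates, objects like the Christoffel symbols are a few lines of autodiff. A sketch in JAX with an analytic stand-in metric (not the EinFields API; a trained Einstein Field would replace `metric` with a network):

    import jax
    import jax.numpy as jnp

    def metric(x):
        # toy diagonal metric g_{ab}(x) on R^4; stand-in for a neural field
        t, r = x[0], x[1]
        return jnp.diag(jnp.array([-(1.0 + 0.1 * r**2), 1.0 + 0.1 * t**2, 1.0, 1.0]))

    def christoffel(x):
        # Gamma^l_{mn} = 1/2 g^{ls} (d_m g_{sn} + d_n g_{sm} - d_s g_{mn})
        g_inv = jnp.linalg.inv(metric(x))
        dg = jax.jacfwd(metric)(x)              # dg[a, b, c] = d_c g_{ab}
        term = (jnp.transpose(dg, (0, 2, 1))    # d_m g_{sn}
                + dg                            # d_n g_{sm}
                - jnp.transpose(dg, (2, 0, 1))) # d_s g_{mn}
        return 0.5 * jnp.einsum('ls,smn->lmn', g_inv, term)

    Gamma = christoffel(jnp.array([0.5, 1.0, 0.0, 0.0]))
    print(Gamma.shape)  # (4, 4, 4)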
- Variational integrals on Hessian spaces: partial regularity for critical points We develop regularity theory for critical points of variational integrals defined on Hessian spaces of functions on open, bounded subdomains of R^n, under compactly supported variations. The critical point solves a fourth order nonlinear equation in double divergence form. We show that for smooth convex functionals, a W^{2,infty} critical point with bounded Hessian is smooth provided that its Hessian has a small bounded mean oscillation (BMO). We deduce that the interior singular set of a critical point has Hausdorff dimension at most n-p_0, for some p_0 in (2,3). We state some applications of our results to variational problems in Lagrangian geometry. Finally, we use the Hamiltonian stationary equation to demonstrate the importance of our assumption on the a priori regularity of the critical point. 2 authors · Jul 3, 2023
- On the Correctness of Automatic Differentiation for Neural Networks with Machine-Representable Parameters Recent work has shown that forward- and reverse- mode automatic differentiation (AD) over the reals is almost always correct in a mathematically precise sense. However, actual programs work with machine-representable numbers (e.g., floating-point numbers), not reals. In this paper, we study the correctness of AD when the parameter space of a neural network consists solely of machine-representable numbers. In particular, we analyze two sets of parameters on which AD can be incorrect: the incorrect set on which the network is differentiable but AD does not compute its derivative, and the non-differentiable set on which the network is non-differentiable. For a neural network with bias parameters, we first prove that the incorrect set is always empty. We then prove a tight bound on the size of the non-differentiable set, which is linear in the number of non-differentiabilities in activation functions, and give a simple necessary and sufficient condition for a parameter to be in this set. We further prove that AD always computes a Clarke subderivative even on the non-differentiable set. We also extend these results to neural networks possibly without bias parameters. 3 authors · Jan 30, 2023
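A two-line illustration of the phenomenon analyzed above, using JAX (where, to our understanding, relu's derivative at 0 is deliberately defined to be 0): at the kink, AD returns an element of the Clarke subdifferential [0, 1], but two programs computing the same mathematical function can receive different elements.

    import jax

    print(jax.grad(jax.nn.relu)(0.0))                    # 0.0
    # x + relu(-x) equals relu(x) for every x, yet AD gives a different
    # subderivative at the kink: 1 - relu'(0) = 1.0. Both 0 and 1 lie in [0, 1].
    print(jax.grad(lambda x: x + jax.nn.relu(-x))(0.0))  # 1.0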
- New type of solutions for a critical Grushin-type problem with competing potentials In this paper, we consider a critical Grushin-type problem with double potentials. By applying the reduction argument and local Pohozaev identities, we construct a new family of solutions to this problem, which are concentrated at points lying on the top and the bottom circles of a cylinder. 2 authors · Jun 29, 2024
- Principal Landau Determinants We reformulate the Landau analysis of Feynman integrals with the aim of advancing the state of the art in modern particle-physics computations. We contribute new algorithms for computing Landau singularities, using tools from polyhedral geometry and symbolic/numerical elimination. Inspired by the work of Gelfand, Kapranov, and Zelevinsky (GKZ) on generalized Euler integrals, we define the principal Landau determinant of a Feynman diagram. We illustrate with a number of examples that this algebraic formalism allows us to compute many components of the Landau singular locus. We adapt the GKZ framework by carefully specializing Euler integrals to Feynman integrals. For instance, ultraviolet and infrared singularities are detected as irreducible components of an incidence variety, which project dominantly to the kinematic space. We compute principal Landau determinants for the infinite families of one-loop and banana diagrams with different mass configurations, and for a range of cutting-edge Standard Model processes. Our algorithms build on the Julia package Landau.jl and are implemented in the new open-source package PLD.jl available at https://mathrepo.mis.mpg.de/PLD/. 3 authors · Nov 27, 2023
- On Loewner energy and curve composition The composition gamma circ eta of Jordan curves gamma and eta in universal Teichmüller space is defined through the composition h_gamma circ h_eta of their conformal weldings. We show that whenever gamma and eta are curves of finite Loewner energy I^L, the energy of the composition satisfies I^L(gamma circ eta) lesssim_K I^L(gamma) + I^L(eta), with an explicit constant in terms of the quasiconformal constant K of gamma and eta. We also study the asymptotic growth rate of the Loewner energy under n self-compositions gamma^n := gamma circ ... circ gamma, showing limsup_{n -> infty} (1/n) log I^L(gamma^n) lesssim_K 1, again with an explicit constant. Our approach is to define a new conformally covariant rooted welding functional W_h(y), and show W_h(y) asymp_K I^L(gamma) when h is a welding of gamma and y is any root (a point in the domain of h). In the course of our arguments we also give several new expressions for the Loewner energy, including generalized formulas in terms of the Riemann maps f and g for gamma which hold irrespective of the placement of gamma on the Riemann sphere, the normalization of f and g, and which disks D, D^c subset C serve as domains. An additional corollary is that I^L(gamma) is bounded above by a constant depending only on the Weil-Petersson distance from gamma to the circle. 2 authors · May 6
- Volumes of Nullhomotopies in Nilpotent Spaces The Shadowing Principle of Manin has proved a valuable tool for addressing questions of quantitative topology raised by Gromov in the late twentieth century. The principle informally provides a way for bounded algebraic maps between differential graded algebras to be translated into nearby genuine maps between their geometric realizations. We extend this principle to finite towers of principal K(G,n) fibrations, and in particular apply this construction to nilpotent spaces. As a specific application of the extended principle, we provide upper bounds on the asymptotic behavior of volumes of nullhomotopies of Lipschitz maps into nilpotent spaces. We further refine these bounds in the case when c = 1 to nearly meet those of the simply connected setting. We similarly refine these bounds in the event the target space is coformal, and demonstrate that the bounds in this setting are nearly sharp. 1 authors · Sep 30
- Differentiable Causal Computations via Delayed Trace We investigate causal computations taking sequences of inputs to sequences of outputs where the nth output depends on the first n inputs only. We model these in category theory via a construction taking a Cartesian category C to another category St(C) with a novel trace-like operation called "delayed trace", which misses the yanking and dinaturality axioms of the usual trace. The delayed trace operation provides a feedback mechanism in St(C) with an implicit guardedness guarantee. When C is equipped with a Cartesian differential operator, we construct a differential operator for St(C) using an abstract version of backpropagation through time, a technique from machine learning based on unrolling of functions. This yields a swath of properties for backpropagation through time, including a chain rule and a Schwartz theorem. Our differential operator is also able to compute the derivative of a stateful network without requiring the network to be unrolled. 2 authors · Mar 4, 2019
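A concrete instance of the unrolling construction axiomatized above (a plain-numpy sketch of ours, not the paper's categorical formalism): backpropagation through time for a one-parameter causal map, checked against finite differences.

    import numpy as np

    def run(theta, xs):
        s, ys = 0.0, []
        for x in xs:                  # causal: y_t depends on x_1..x_t only
            s = np.tanh(theta * s + x)
            ys.append(s)
        return ys

    def bptt_grad(theta, xs):
        # forward pass, storing states and pre-activations
        s, states, pres = 0.0, [0.0], []
        for x in xs:
            z = theta * s + x
            s = np.tanh(z)
            states.append(s)
            pres.append(z)
        # reverse pass for the loss L = sum_t y_t
        grad, ds = 0.0, 0.0
        for t in reversed(range(len(xs))):
            dz = (1.0 + ds) * (1.0 - np.tanh(pres[t]) ** 2)  # dL/dz_t
            grad += dz * states[t]    # dz_t/dtheta = s_{t-1}
            ds = dz * theta           # sensitivity flowing into the previous state
        return grad

    xs = [0.5, -0.2, 0.8, 0.1]
    g = bptt_grad(0.7, xs)
    eps = 1e-6
    fd = (sum(run(0.7 + eps, xs)) - sum(run(0.7 - eps, xs))) / (2 * eps)
    print(g, fd)                      # agree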
- Cusps and Commensurability Classes of Hyperbolic 4-Manifolds There are six orientable, compact, flat 3-manifolds that can occur as cusp cross-sections of hyperbolic 4-manifolds. This paper provides criteria for exactly when a given commensurability class of arithmetic hyperbolic 4-manifolds contains a representative with a given cusp type. In particular, for three of the six cusp types, we provide infinitely many examples of commensurability classes that contain no manifolds with cusps of the given type; no such examples were previously known for any cusp type. 1 authors · Sep 24, 2021
- An open-closed Deligne-Mumford field theory associated to a Lagrangian submanifold Let L subset X be a compact embedded Lagrangian in a compact symplectic manifold. We present the moduli spaces of holomorphic maps of arbitrary genus with boundary on L as a global Kuranishi chart, generalising the work of Abouzaid-McLean-Smith and Hirschi-Swaminathan. We use this to define an open-closed Deligne-Mumford theory whose open genus zero part is the Fukaya A_infty algebra associated to L, and whose closed part gives the Gromov-Witten theory of X. Combined with results of Costello, this has applications in obtaining Gromov-Witten invariants from the Fukaya category. 2 authors · Jan 8
- Open Gromov-Witten theory on Calabi-Yau three-folds I We propose a general theory of the Open Gromov-Witten invariant on Calabi-Yau three-folds. We introduce the moduli space of multi-curves and show how it leads to invariants. Our construction is based on an idea of Witten. In the special case that each connected component of the Lagrangian submanifold has the rational homology of a sphere we define rational numbers F_{g,h} for each genus g and number h of boundary components. 1 authors · Jul 29, 2009
- Learners' Languages In "Backprop as functor", the authors show that the fundamental elements of deep learning -- gradient descent and backpropagation -- can be conceptualized as a strong monoidal functor Para(Euc) -> Learn from the category of parameterized Euclidean spaces to that of learners, a category developed explicitly to capture parameter update and backpropagation. It was soon realized that there is an isomorphism Learn ≅ Para(Slens), where Slens is the symmetric monoidal category of simple lenses as used in functional programming. In this note, we observe that Slens is a full subcategory of Poly, the category of polynomial functors in one variable, via the functor A ↦ Ay^A. Using the fact that (Poly, ⊗) is monoidal closed, we show that a map A -> B in Para(Slens) has a natural interpretation in terms of dynamical systems (more precisely, generalized Moore machines) whose interface is the internal-hom type [Ay^A, By^B]. Finally, we review the fact that the category p-Coalg of dynamical systems on any p in Poly forms a topos, and consider the logical propositions that can be stated in its internal language. We give gradient descent as an example, and we conclude by discussing some directions for future work. 1 authors · Mar 1, 2021
1 RotaTouille: Rotation Equivariant Deep Learning for Contours Contours or closed planar curves are common in many domains. For example, they appear as object boundaries in computer vision, isolines in meteorology, and the orbits of rotating machinery. In many cases when learning from contour data, planar rotations of the input will result in correspondingly rotated outputs. It is therefore desirable that deep learning models be rotationally equivariant. In addition, contours are typically represented as an ordered sequence of edge points, where the choice of starting point is arbitrary. It is therefore also desirable for deep learning methods to be equivariant under cyclic shifts. We present RotaTouille, a deep learning framework for learning from contour data that achieves both rotation and cyclic shift equivariance through complex-valued circular convolution. We further introduce and characterize equivariant non-linearities, coarsening layers, and global pooling layers to obtain invariant representations for downstream tasks. Finally, we demonstrate the effectiveness of RotaTouille through experiments in shape classification, reconstruction, and contour regression. 2 authors · Aug 22
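Both equivariances claimed above follow from the linearity and shift-invariance of circular convolution, and are easy to verify numerically (a minimal check of the underlying identity, not the paper's full layer):

    import numpy as np

    rng = np.random.default_rng(0)
    z = rng.standard_normal(16) + 1j * rng.standard_normal(16)   # contour points
    w = rng.standard_normal(16) + 1j * rng.standard_normal(16)   # filter

    def circ_conv(z, w):
        # circular convolution via the convolution theorem
        return np.fft.ifft(np.fft.fft(z) * np.fft.fft(w))

    theta, shift = 0.8, 5
    rotated = np.exp(1j * theta) * z           # global rotation of the contour
    shifted = np.roll(z, shift)                # cyclic relabeling of the points

    print(np.allclose(circ_conv(rotated, w), np.exp(1j * theta) * circ_conv(z, w)))  # True
    print(np.allclose(circ_conv(shifted, w), np.roll(circ_conv(z, w), shift)))       # True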
- LE-PDE++: Mamba for accelerating PDEs Simulations Partial Differential Equations are foundational in modeling natural systems such as fluid dynamics and weather forecasting. The Latent Evolution of PDEs (LE-PDE) method is designed to address the computational intensity of classical and deep learning-based PDE solvers by proposing a scalable and efficient alternative. To enhance the efficiency and accuracy of LE-PDE, we incorporate the Mamba model, an advanced machine learning model known for its predictive efficiency and robustness in handling complex dynamic systems, together with a progressive learning strategy. The resulting method was tested on several benchmark problems. It demonstrated a marked reduction in computational time compared to traditional solvers and standalone deep learning models while maintaining high accuracy in predicting system behavior over time. Our method doubles the inference speed compared to LE-PDE while retaining the same level of parameter efficiency, making it well-suited for scenarios requiring long-term predictions. 6 authors · Nov 4, 2024
1 Incorporating Riemannian Geometric Features for Learning Coefficient of Pressure Distributions on Airplane Wings The aerodynamic coefficients of an aircraft are significantly impacted by its geometry, especially when the angle of attack (AoA) is large. In the field of aerodynamics, traditional polynomial-based parameterization uses as few parameters as possible to describe the geometry of an airfoil. However, because the 3D geometry of a wing is more complicated than a 2D airfoil, polynomial-based parameterizations have difficulty in accurately representing the entire shape of a wing in 3D space. Existing deep learning-based methods can extract massive latent neural representations for the shape of 2D airfoils or 2D slices of wings. Recent studies highlight that directly taking geometric features as inputs to the neural networks can improve the accuracy of predicted aerodynamic coefficients. Motivated by geometry theory, we propose to incorporate Riemannian geometric features for learning Coefficient of Pressure (CP) distributions on wing surfaces. Our method calculates geometric features (Riemannian metric, connection, and curvature) and further inputs the geometric features, coordinates, and flight conditions into a deep learning model to predict the CP distribution. Experimental results show that our method, compared to the state-of-the-art Deep Attention Network (DAN), reduces the predicted mean square error (MSE) of CP by an average of 8.41% on the DLR-F11 aircraft test set. 4 authors · Dec 22, 2023
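The geometric features in question are classical objects. A toy sketch of the first of them, the Riemannian metric (first fundamental form) of a parametrized surface patch, computed by finite differences on an invented wing-like surface (ours, not the paper's pipeline or the DLR-F11 data):

    import numpy as np

    def surface(u, v):
        # toy "wing" patch: chordwise u, spanwise v, with camber fading along the span
        z = 0.1 * np.sin(np.pi * u) * (1.0 - 0.3 * v)
        return np.array([u, v, z])

    def first_fundamental_form(u, v, h=1e-5):
        Xu = (surface(u + h, v) - surface(u - h, v)) / (2 * h)  # tangent d/du
        Xv = (surface(u, v + h) - surface(u, v - h)) / (2 * h)  # tangent d/dv
        E, F, G = Xu @ Xu, Xu @ Xv, Xv @ Xv
        return np.array([[E, F], [F, G]])   # metric g_ij in (u, v) coordinates

    print(np.round(first_fundamental_form(0.3, 0.5), 4))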
- The simple essence of automatic differentiation Automatic differentiation (AD) in reverse mode (RAD) is a central component of deep learning and other uses of large-scale optimization. Commonly used RAD algorithms such as backpropagation, however, are complex and stateful, hindering deep understanding, improvement, and parallel execution. This paper develops a simple, generalized AD algorithm calculated from a simple, natural specification. The general algorithm is then specialized by varying the representation of derivatives. In particular, applying well-known constructions to a naive representation yields two RAD algorithms that are far simpler than previously known. In contrast to commonly used RAD implementations, the algorithms defined here involve no graphs, tapes, variables, partial derivatives, or mutation. They are inherently parallel-friendly, correct by construction, and usable directly from an existing programming language with no need for new data types or programming style, thanks to use of an AD-agnostic compiler plugin. 1 authors · Apr 2, 2018
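The specification in the abstract above is compact enough to sketch in a few lines of Python (one-dimensional, with the derivative's linear map represented by a scalar closure); the chain rule is literally composition, with no graph, tape, or mutation in sight:

    import math

    # differentiation pairs a value with its derivative-as-linear-map
    def d_sin(x):
        return math.sin(x), lambda dx: math.cos(x) * dx

    def d_square(x):
        return x * x, lambda dx: 2 * x * dx

    def compose(df, dg):
        # D(g . f)(x) = (g(f(x)), g'(f(x)) composed with f'(x))
        def h(x):
            y, f_lin = df(x)
            z, g_lin = dg(y)
            return z, lambda dx: g_lin(f_lin(dx))
        return h

    d_sin_sq = compose(d_sin, d_square)     # x -> sin(x)^2
    val, lin = d_sin_sq(1.2)
    print(val, lin(1.0))                    # sin(1.2)^2 and 2 sin(1.2) cos(1.2)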
- An elasticity-based mesh morphing technique with application to reduced-order modeling The aim of this article is to introduce a new methodology for constructing morphings between shapes that have identical topology. This morphing is obtained by deforming a reference shape, through the resolution of a sequence of linear elasticity equations, onto the target shape. In particular, our approach does not assume any knowledge of a boundary parametrization. Furthermore, we demonstrate how constraints can be imposed on specific points, lines and surfaces in the reference domain to ensure alignment with their counterparts in the target domain after morphing. Additionally, we show how the proposed methodology can be integrated in an offline and online paradigm, which is useful in reduced-order modeling scenarios involving variable shapes. This framework facilitates the efficient computation of the morphings in various geometric configurations, thus improving the versatility and applicability of the approach. The methodology is illustrated on the regression problem of the drag and lift coefficients of airfoils of non-parameterized variable shapes. 5 authors · Jul 2, 2024
- Generating functions for some series of characters of classical Lie groups There exist a number of well known multiplicative generating functions for series of Schur functions. Amongst these are some related to the dual Cauchy identity whose expansion coefficients are rather simple, and in some cases periodic in parameters specifying the Schur functions. More recently similar identities have been found involving expansions in terms of characters of the symplectic group. Here these results are extended and generalised to all classical Lie groups. This is done through the derivation of explicit recurrence relations for the expansion coefficients based on the action of the Weyl groups of both the symplectic and orthogonal groups. Copious results are tabulated in the form of explicit values of the expansion coefficients as functions of highest weight parameters. An alternative approach is then based on dual pairs of symplectic and/or orthogonal groups. A byproduct of this approach is that expansions in terms of spin orthogonal group characters can always be recovered from non-spin cases. 1 authors · Mar 1, 2023